829 results for Social hypothesis testing


Relevance: 80.00%

Abstract:

Monte Carlo simulation was conducted to investigate parameter estimation and hypothesis testing in several well-known adaptive randomization procedures. The four urn models studied are the Randomized Play-the-Winner (RPW), Randomized Pólya Urn (RPU), Birth and Death Urn with Immigration (BDUI), and Drop-the-Loser (DL) urn. Two sequential estimation methods, sequential maximum likelihood estimation (SMLE) and the doubly adaptive biased coin design (DABC), are simulated at three optimal allocation targets that minimize the expected number of failures under the assumption of constant variance of the simple difference (RSIHR), the relative risk (ORR), and the odds ratio (OOR), respectively. The log likelihood ratio test and three Wald-type tests (simple difference, log relative risk, log odds ratio) are compared across the adaptive procedures. Simulation results indicate that although RPW is slightly better at assigning more patients to the superior treatment, the DL method is considerably less variable and its test statistics have better normality. Compared with SMLE, DABC has a slightly higher overall response rate with lower variance, but larger bias and variance in parameter estimation. Additionally, the test statistics under SMLE have better normality and a lower type I error rate, and the power of hypothesis testing is more comparable with that of equal randomization. RSIHR usually has the highest power among the three optimal allocation ratios; however, the ORR allocation has better power and a lower type I error rate when the log relative risk is the test statistic, and the expected number of failures under ORR is smaller than under RSIHR. The simple difference of response rates has the worst normality among the four test statistics, and the power of the hypothesis test is always inflated when it is used. In contrast, the normality of the log likelihood ratio test statistic is robust to the choice of adaptive randomization procedure.
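As background for the urn models compared above, here is a minimal sketch of the Randomized Play-the-Winner rule for a two-arm trial with binary responses; the function name, parameters, and response probabilities are illustrative and not taken from the dissertation.

```python
import numpy as np

def simulate_rpw(p_a, p_b, n_patients, alpha=1, beta=1, seed=0):
    """Simulate one trial under the Randomized Play-the-Winner rule.

    The urn starts with `alpha` balls of each treatment type; after each
    response, `beta` balls of the successful treatment (or of the opposite
    treatment after a failure) are added.
    """
    rng = np.random.default_rng(seed)
    urn = {"A": alpha, "B": alpha}
    n_assigned = {"A": 0, "B": 0}
    successes = {"A": 0, "B": 0}
    for _ in range(n_patients):
        # draw a ball with probability proportional to the urn composition
        total = urn["A"] + urn["B"]
        arm = "A" if rng.random() < urn["A"] / total else "B"
        n_assigned[arm] += 1
        success = rng.random() < (p_a if arm == "A" else p_b)
        successes[arm] += success
        # reinforce the winning arm, or the other arm after a failure
        rewarded = arm if success else ("B" if arm == "A" else "A")
        urn[rewarded] += beta
    return n_assigned, successes

# Example: A is the superior treatment (p = 0.7 vs 0.5); RPW tends to
# allocate more than half of the patients to A on average.
print(simulate_rpw(0.7, 0.5, n_patients=100))
```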

Relevance: 80.00%

Abstract:

Interim clinical trial monitoring procedures are motivated by ethical and economic considerations. Classical Brownian motion (Bm) techniques for statistical monitoring of clinical trials are widely used: conditional power arguments and boundary-crossing probabilities based on α-spending functions are popular hypothesis testing procedures under the Brownian motion assumption. However, it is not rare for trial data to meet the assumptions of Brownian motion only partially. Therefore, I used a more general stochastic process, fractional Brownian motion (fBm), to model the test statistics. Fractional Brownian motion does not have the Markov property: future observations depend not only on the present observations but also on the past ones. In this dissertation we simulated a wide range of fBm data, e.g., H = 0.5 (that is, classical Bm) versus 0.5 < H < 1, with and without treatment effects. The performance of conditional-power and boundary-crossing interim analyses was then compared under the assumption that the data follow Bm or fBm. Our simulation study suggests that the conditional power or boundaries under fBm assumptions are generally higher than those under Bm assumptions when H > 0.5 and also match the empirical results better.
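A minimal sketch of how fBm paths like those used in the simulation study can be generated, via the Cholesky factor of the fBm covariance function; the grid, parameter names, and Hurst values below are illustrative.

```python
import numpy as np

def simulate_fbm(n_steps, hurst, t_max=1.0, seed=0):
    """Simulate one fractional Brownian motion path on (0, t_max] by
    Cholesky factorisation of its covariance matrix.

    Cov(B_H(s), B_H(t)) = 0.5 * (s**(2H) + t**(2H) - |t - s|**(2H));
    hurst = 0.5 recovers classical Brownian motion.
    """
    rng = np.random.default_rng(seed)
    t = np.linspace(t_max / n_steps, t_max, n_steps)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * hurst) + u**(2 * hurst) - np.abs(s - u)**(2 * hurst))
    path = np.linalg.cholesky(cov) @ rng.standard_normal(n_steps)
    return t, path

t, bm_path = simulate_fbm(200, hurst=0.5)   # classical Bm
t, fbm_path = simulate_fbm(200, hurst=0.8)  # long-range dependent path
```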

Relevance: 80.00%

Abstract:

My dissertation focuses mainly on Bayesian adaptive designs for phase I and phase II clinical trials. It addresses three specific topics: (1) proposing a novel two-dimensional dose-finding algorithm for biological agents, (2) developing Bayesian adaptive screening designs to provide more efficient and ethical clinical trials, and (3) incorporating missing late-onset responses into early stopping decisions. Treating patients with novel biological agents is becoming a leading trend in oncology. Unlike cytotoxic agents, for which toxicity and efficacy increase monotonically with dose, biological agents may exhibit non-monotonic dose-response relationships. Using a trial with two biological agents as an example, we propose a phase I/II trial design to identify the biologically optimal dose combination (BODC), defined as the dose combination of the two agents with the highest efficacy and tolerable toxicity. A change-point model reflects the fact that the dose-toxicity surface of the combined agents may plateau at higher dose levels, and a flexible logistic model accommodates a possibly non-monotonic dose-efficacy relationship. During the trial, we continuously update the posterior estimates of toxicity and efficacy and assign patients to the most appropriate dose combination. We propose a novel dose-finding algorithm that encourages sufficient exploration of untried dose combinations in the two-dimensional space. Extensive simulation studies show that the proposed design has desirable operating characteristics in identifying the BODC under various patterns of dose-toxicity and dose-efficacy relationships.

Trials of combination therapies for the treatment of cancer are playing an increasingly important role in the battle against this disease. To handle more efficiently the large number of combination therapies that must be tested, we propose a novel Bayesian phase II adaptive screening design to select simultaneously among possible treatment combinations involving multiple agents. Our design formulates the selection procedure as a Bayesian hypothesis testing problem in which the superiority of each treatment combination corresponds to a single hypothesis. During the trial, we use the current posterior probabilities of all hypotheses to adaptively allocate patients to treatment combinations. Simulation studies show that the proposed design substantially outperforms the conventional multi-arm balanced factorial trial design: it yields a significantly higher probability of selecting the best treatment, allocates substantially more patients to efficacious treatments, and provides higher power to identify the best treatment at the end of the trial. The design is most appropriate for trials that combine multiple agents and screen for the efficacious combination to be investigated further.

Phase II studies are usually single-arm trials conducted to test the efficacy of experimental agents and decide whether an agent is promising enough to be sent to a phase III trial. Interim monitoring is employed to stop a trial early for futility and avoid assigning an unacceptable number of patients to inferior treatments. We propose a Bayesian single-arm phase II design with continuous monitoring for estimating the response rate of the experimental drug. To address late-onset responses, we use a piecewise exponential model to estimate the hazard function of the time-to-response data and handle the missing responses with a multiple imputation approach. We evaluate the operating characteristics of the proposed method through extensive simulation studies and show that it reduces the total trial duration and yields desirable operating characteristics for different physician-specified lower bounds of the response rate and different true response rates.
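A much-simplified sketch of continuous Bayesian futility monitoring for the single-arm setting described above, using a Beta-binomial posterior on the response rate; it ignores the late-onset-response and multiple-imputation machinery of the proposed design, and the names, prior, and threshold are illustrative.

```python
from scipy.stats import beta

def futility_stop(n_responses, n_enrolled, p0=0.2, prior=(0.5, 0.5), threshold=0.05):
    """Bayesian futility rule for a single-arm phase II trial.

    Stop if the posterior probability that the response rate exceeds the
    physician-specified lower bound p0 drops below `threshold`.
    """
    a, b = prior
    post = beta(a + n_responses, b + n_enrolled - n_responses)
    prob_promising = 1 - post.cdf(p0)
    return prob_promising < threshold, prob_promising

# Example: 2 responses in 20 patients against a 20% lower bound
print(futility_stop(2, 20))
```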

Relevance: 80.00%

Abstract:

The hierarchical linear growth model (HLGM), a flexible and powerful analytic method, has played an increasingly important role in psychology, public health, and the medical sciences in recent decades. Researchers who fit an HLGM are mostly interested in the treatment effect on individual trajectories, which is captured by the cross-level interaction effects. However, the statistical hypothesis test for a cross-level interaction in an HLGM only tells us whether there is a significant group difference in the average rate of change, rate of acceleration, or higher-order polynomial effect; it conveys no information about the magnitude of the difference between the group trajectories at specific time points. Given the limitations of, and growing criticism of, statistical hypothesis testing, reporting and interpreting effect sizes in HLGM analyses has received increasing emphasis in recent years. Nevertheless, most researchers do not report these model-implied effect sizes for comparing group trajectories, or their corresponding confidence intervals, because appropriate standard functions for estimating effect sizes associated with the model-implied difference between group trajectories are lacking, as are routines in popular statistical software to compute them automatically. The present project is the first to establish appropriate computing functions for assessing the standardized difference between group trajectories in an HLGM. We propose two functions to estimate effect sizes for the model-based difference between group trajectories at specific times, and we also suggest robust effect sizes to reduce the bias of the estimates. We then apply the proposed functions to estimate the population effect sizes (d) and robust effect sizes (du) for the cross-level interaction in an HLGM using three simulated datasets, compare three methods of constructing confidence intervals around d and du, and recommend the best one for application. Finally, we construct 95% confidence intervals with the most suitable method for the effect sizes obtained from the three simulated datasets. The effect sizes between group trajectories for the three simulated longitudinal datasets indicate that, even when the statistical hypothesis test shows no significant difference between group trajectories, the effect sizes can still be large at some time points. Effect sizes between group trajectories in an HLGM analysis therefore provide additional, meaningful information for assessing the group effect on individual trajectories. We also compared three methods of constructing 95% confidence intervals around the corresponding effect sizes, which account for the uncertainty of the effect size estimates relative to the population parameter: we suggest the method based on the noncentral t-distribution when its assumptions hold, and the bootstrap bias-corrected and accelerated method when they are not met.
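For intuition, here is a small sketch of a model-implied standardized difference between two group trajectories in a linear growth model, the kind of quantity the proposed functions formalize; the parameterization and values are illustrative, not the dissertation's actual estimators.

```python
import numpy as np

def trajectory_effect_size(gamma_int, gamma_slope, sd_pooled, times):
    """Model-implied standardized difference between two group trajectories
    in a linear growth model, evaluated at each time point.

    gamma_int   : fixed-effect group difference in intercepts
    gamma_slope : fixed-effect group difference in slopes (cross-level interaction)
    sd_pooled   : pooled within-group standard deviation of the outcome
    """
    times = np.asarray(times, dtype=float)
    mean_diff = gamma_int + gamma_slope * times
    return mean_diff / sd_pooled

# Example: a non-significant interaction can still imply a sizeable
# standardized difference late in the study.
print(trajectory_effect_size(0.1, 0.15, sd_pooled=1.2, times=[0, 1, 2, 3, 4]))
```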

Relevance: 80.00%

Abstract:

This paper investigates causal relationships among agriculture, manufacturing, and exports in Tanzania using time series data for the period 1970-2005. The empirical results show Granger causality running from agriculture to both exports and manufacturing, and from exports to both agricultural GDP and manufacturing GDP; any two of the three variables also jointly cause the third. There is also some evidence that manufacturing does not cause exports or agriculture. Regarding cointegration, agricultural GDP and exports are pairwise cointegrated, as are exports and manufacturing; agriculture and manufacturing are cointegrated as well, although this result is sensitive to the lag length. Moreover, manufacturing, exports, and agriculture are jointly cointegrated, showing that they share a long-run relationship, which has important economic implications.
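A hedged sketch of the two kinds of tests behind these results using statsmodels; the series below are simulated stand-ins, since the Tanzanian data are not reproduced here.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests, coint

# Simulated stand-ins for the 1970-2005 annual series (e.g. log agricultural
# GDP and log exports); not the actual Tanzanian data.
rng = np.random.default_rng(1)
agric = np.cumsum(rng.normal(0.03, 0.05, 36))
exports = 0.8 * agric + rng.normal(0.0, 0.05, 36)
data = pd.DataFrame({"exports": exports, "agric": agric})

# Granger causality: does agric help predict exports?  The second column is
# tested as a cause of the first.
results = grangercausalitytests(data[["exports", "agric"]], maxlag=2)
p_granger = results[2][0]["ssr_ftest"][1]

# Engle-Granger cointegration test between the two series
t_stat, p_coint, _ = coint(agric, exports)
print(f"Granger p-value: {p_granger:.3f}, cointegration p-value: {p_coint:.3f}")
```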

Relevance: 80.00%

Abstract:

In this paper, multiple regression analysis is used to model the top of descent (TOD) location of user-preferred descent trajectories computed by the flight management system (FMS) on over 1000 commercial flights into Melbourne, Australia. In addition to recording TOD, the cruise altitude, final altitude, cruise Mach, descent speed, wind, and engine type were also identified for use as the independent variables in the regression analysis. Both first-order and second-order models are considered, where cross-validation, hypothesis testing, and additional analysis are used to compare models. This identifies the models that should give the smallest errors if used to predict TOD location for new data in the future. A model that is linear in TOD altitude, final altitude, descent speed, and wind gives an estimated standard deviation of 3.9 nmi for TOD location given the trajectory parameters, which means about 80% of predictions would have error less than 5 nmi in absolute value. This accuracy is better than demonstrated by other ground automation predictions using kinetic models. Furthermore, this approach would enable online learning of the model. Additional data or further knowledge of algorithms is necessary to conclude definitively that no second-order terms are appropriate. Possible applications of the linear model are described, including enabling arriving aircraft to fly optimized descents computed by the FMS even in congested airspace.
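A minimal sketch of fitting a first-order model of the kind described, via ordinary least squares; the predictor set and data shapes are placeholders, not the paper's fitted model.

```python
import numpy as np

def fit_tod_model(X, y):
    """Ordinary least squares with an intercept.

    X : predictor matrix, e.g. columns for cruise altitude, final altitude,
        descent speed and wind (placeholders, not the paper's exact set).
    y : TOD location, e.g. distance from the runway in nmi.
    Returns the coefficients and the residual standard deviation, the
    analogue of the ~3.9 nmi figure quoted above.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    sigma = np.sqrt(resid @ resid / (len(y) - X1.shape[1]))
    return beta, sigma
```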

Relevance: 80.00%

Abstract:

The private sector currently plays an important role in the provision and management of transport infrastructure in middle- and low-income countries, primarily through public-private partnership (PPP) projects. Many countries have promoted PPP projects to meet the high demand for transport infrastructure, given the scarcity of public resources and the lack of efficiency in the provision of public services. As a result, PPPs have experienced significant growth worldwide over the past two decades. Despite this growing trend, many countries have not been able to attract private sector participation in the provision of their infrastructure, or have not achieved the level of private participation that would have been required to meet their objectives. According to various authors, the development and success of PPP projects for transport infrastructure in any country is conditioned by a number of factors, one of them being the quality of its institutional environment.

The main objective of this dissertation is to analyze the influence of the institutional environment on the volume of investment in public-private partnership projects for transport infrastructure in middle- and low-income countries. To meet this objective, we conducted an empirical analysis of 81 countries across six regions of the world over the period 1996-2013. The analysis develops two empirical models, applying mainly two methodologies: hypothesis testing and Tobit panel-data models. These models allow a comprehensive analysis of the subject. The results provide evidence that the quality of the institutional environment has a significant influence on the volume of investment in transport PPP projects. Overall, this dissertation shows empirically that the private sector has tended to invest more in countries with stronger institutional environments, that is, countries with a higher level of rule of law, political and regulatory stability, government effectiveness, and control of corruption. In addition, countries that have improved their institutional quality have also experienced an increase in the volume of investment in transport PPPs.
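A hedged sketch of a Tobit likelihood of the kind used for investment models that are left-censored at zero (countries or years with no PPP investment); this pooled version omits the panel structure (country effects) used in the dissertation, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negloglik(params, X, y):
    """Negative log-likelihood of a Tobit model left-censored at zero."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    xb = X @ beta
    censored = y <= 0
    ll = np.where(
        censored,
        norm.logcdf(-xb / sigma),                     # P(latent investment <= 0)
        norm.logpdf((y - xb) / sigma) - log_sigma,    # density of observed investment
    )
    return -ll.sum()

def fit_pooled_tobit(X, y):
    """Fit the pooled Tobit by maximum likelihood (intercept added)."""
    X1 = np.column_stack([np.ones(len(X)), np.asarray(X, dtype=float)])
    start = np.zeros(X1.shape[1] + 1)
    res = minimize(tobit_negloglik, start, args=(X1, np.asarray(y, dtype=float)),
                   method="BFGS")
    return res.x
```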

Relevance: 80.00%

Abstract:

Evolutionary trees are often estimated from DNA or RNA sequence data. How much confidence should we have in the estimated trees? In 1985, Felsenstein [Felsenstein, J. (1985) Evolution 39, 783–791] suggested the use of the bootstrap to answer this question. Felsenstein’s method, which in concept is a straightforward application of the bootstrap, is widely used, but has been criticized as biased in the genetics literature. This paper concerns the use of the bootstrap in the tree problem. We show that Felsenstein’s method is not biased, but that it can be corrected to better agree with standard ideas of confidence levels and hypothesis testing. These corrections can be made by using the more elaborate bootstrap method presented here, at the expense of considerably more computation.
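A schematic of Felsenstein's column-resampling bootstrap, the procedure whose confidence interpretation the paper examines and corrects; `tree_builder` is a placeholder for any tree-estimation routine returning a set of (hashable) clades, and the paper's more elaborate corrected bootstrap is not implemented here.

```python
import numpy as np

def bootstrap_support(alignment, tree_builder, n_boot=200, seed=0):
    """Felsenstein-style bootstrap sketch.

    Resample alignment columns with replacement, rebuild the tree each
    time, and report how often each clade of the original estimated tree
    reappears.  `alignment` is a taxa-by-sites array.
    """
    rng = np.random.default_rng(seed)
    n_sites = alignment.shape[1]
    original_clades = tree_builder(alignment)
    counts = {clade: 0 for clade in original_clades}
    for _ in range(n_boot):
        cols = rng.integers(0, n_sites, size=n_sites)   # resampled site indices
        clades = tree_builder(alignment[:, cols])
        for clade in original_clades:
            counts[clade] += clade in clades
    return {clade: c / n_boot for clade, c in counts.items()}
```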

Relevance: 80.00%

Abstract:

Site-directed mutagenesis and combinatorial libraries are powerful tools for providing information about the relationship between protein sequence and structure. Here we report two extensions that expand the utility of combinatorial mutagenesis for the quantitative assessment of hypotheses about the determinants of protein structure. First, we show that resin-splitting technology, which allows the construction of arbitrarily complex libraries of degenerate oligonucleotides, can be used to construct more complex protein libraries for hypothesis testing than can be constructed from oligonucleotides limited to degenerate codons. Second, using eglin c as a model protein, we show that regression analysis of activity scores from library data can be used to assess the relative contributions to the specific activity of the amino acids that were varied in the library. The regression parameters derived from the analysis of a 455-member sample from a library wherein four solvent-exposed sites in an α-helix can contain any of nine different amino acids are highly correlated (P < 0.0001, R2 = 0.97) to the relative helix propensities for those amino acids, as estimated by a variety of biophysical and computational techniques.
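A rough sketch of the kind of regression analysis described: activity scores regressed on indicator variables for the amino acid at each varied position, after which the fitted per-residue coefficients can be compared with a helix-propensity scale. The encoding and data are illustrative; the eglin c library data are not reproduced here.

```python
import numpy as np

def one_hot(sequences, alphabet):
    """Encode each varied position of each sequence as indicator columns."""
    n, L = len(sequences), len(sequences[0])
    X = np.zeros((n, L * len(alphabet)))
    for i, seq in enumerate(sequences):
        for j, aa in enumerate(seq):
            X[i, j * len(alphabet) + alphabet.index(aa)] = 1.0
    return X

def residue_contributions(sequences, scores, alphabet):
    """Least-squares coefficients averaged over positions, one per amino acid.

    These averages are the quantities one would correlate with a
    helix-propensity scale (e.g. via np.corrcoef).
    """
    scores = np.asarray(scores, dtype=float)
    X = one_hot(sequences, alphabet)
    beta, *_ = np.linalg.lstsq(X, scores - scores.mean(), rcond=None)
    return beta.reshape(len(sequences[0]), len(alphabet)).mean(axis=0)
```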


Relevance: 80.00%

Abstract:

The controversy over the interpretation of DNA profile evidence in forensic identification can be attributed in part to confusion over the mode(s) of statistical inference appropriate to this setting. Although there has been substantial discussion in the literature of, for example, the role of population genetics issues, few authors have made explicit the inferential framework which underpins their arguments. This lack of clarity has led both to unnecessary debates over ill-posed or inappropriate questions and to the neglect of some issues which can have important consequences. We argue that the mode of statistical inference which seems to underlie the arguments of some authors, based on a hypothesis testing framework, is not appropriate for forensic identification. We propose instead a logically coherent framework in which, for example, the roles both of the population genetics issues and of the nonscientific evidence in a case are incorporated. Our analysis highlights several widely held misconceptions in the DNA profiling debate. For example, the profile frequency is not directly relevant to forensic inference. Further, very small match probabilities may in some settings be consistent with acquittal. Although DNA evidence is typically very strong, our analysis of the coherent approach highlights situations which can arise in practice where alternative methods for assessing DNA evidence may be misleading.
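A toy illustration, under the coherent likelihood-ratio framework the authors advocate, of why a very small match probability need not by itself settle the question: posterior odds depend on the prior odds carried by the non-scientific evidence. The numbers are purely illustrative.

```python
def posterior_odds_of_identity(prior_odds, match_probability):
    """Posterior odds = prior odds * likelihood ratio.

    For a reported match the likelihood ratio is roughly 1 / match_probability
    (ignoring typing error and relatedness, which the paper discusses).
    """
    return prior_odds * (1.0 / match_probability)

# A tiny match probability can still leave posterior odds near 1 when the
# prior odds are very small (e.g. no other evidence against the suspect).
print(posterior_odds_of_identity(prior_odds=1e-6, match_probability=1e-6))
```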

Relevance: 80.00%

Abstract:

Student dropout affects private and public universities in Brazil, causing financial losses proportional to its incidence: 12% and 26%, respectively, at the national level, and 23% at the Universidade de São Paulo (USP), which is why the variables that govern this behavior must be understood. In this context, the study presents the losses caused by dropout and the importance of investigating it at the Escola Politécnica da USP (EPUSP) (section 1), develops a literature review on the causes of dropout (section 2), and proposes methods for obtaining dropout rates from the Federal Government and USP databases (section 3). The results are presented in section 4. To draw inferences about the causes of dropout at EPUSP, databases described and processed in section 5.1 were analyzed; they contain information (e.g., type of admission and exit, length of enrollment, and academic records) on 16,664 students admitted between 1970 and 2000. Statistical models were proposed, and the concepts of the chi-square and Student's t hypothesis tests used in the research were detailed (section 5.2). The descriptive statistics show that EPUSP has a dropout rate of 15% (with the highest incidence in the second year: 24.65%), that dropouts remain enrolled for 3.8 years, that the probability of dropping out increases after the sixth year, and that the algebra and calculus courses are the main failing subjects in the first year (section 5.3). The inferential statistics demonstrated relationships between dropout and mode of admission to EPUSP, and between dropout and failure in first-year courses; combined with the descriptive statistics, these results point to vocational mismatch, lack of persistence, poor adaptation to EPUSP, and deficiencies in prior education as the variables responsible for dropout (section 5.4).
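For reference, a small sketch of the two hypothesis tests mentioned in section 5.2, using scipy; the contingency table and samples below are made up, not the EPUSP data.

```python
import numpy as np
from scipy.stats import chi2_contingency, ttest_ind

# Chi-square test of association between admission route and dropout
table = np.array([[120, 880],    # admission route 1: dropped out / graduated
                  [ 60, 940]])   # admission route 2: dropped out / graduated
chi2, p_chi, dof, _ = chi2_contingency(table)

# Welch t-test comparing time enrolled of dropouts vs. graduates
rng = np.random.default_rng(0)
years_dropouts = rng.normal(3.8, 1.0, 200)
years_graduates = rng.normal(5.2, 0.8, 200)
t_stat, p_t = ttest_ind(years_dropouts, years_graduates, equal_var=False)

print(f"chi-square p = {p_chi:.4f}, t-test p = {p_t:.4f}")
```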

Relevance: 80.00%

Abstract:

A recent development of the Markov chain Monte Carlo (MCMC) technique is the emergence of MCMC samplers that allow transitions between different models. Such samplers make possible a range of computational tasks involving models, including model selection, model evaluation, model averaging and hypothesis testing. An example of this type of sampler is the reversible jump MCMC sampler, which is a generalization of the Metropolis-Hastings algorithm. Here, we present a new MCMC sampler of this type. The new sampler is a generalization of the Gibbs sampler, but somewhat surprisingly, it also turns out to encompass as particular cases all of the well-known MCMC samplers, including those of Metropolis, Barker, and Hastings. Moreover, the new sampler generalizes the reversible jump MCMC. It therefore appears to be a very general framework for MCMC sampling. This paper describes the new sampler and illustrates its use in three applications in Computational Biology, specifically determination of consensus sequences, phylogenetic inference and delineation of isochores via multiple change-point analysis.
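For orientation, a generic Metropolis sampler for the fixed-dimension, symmetric-proposal case, one of the special cases the new sampler is said to encompass; the paper's generalized sampler itself is not reproduced here.

```python
import numpy as np

def metropolis(log_target, proposal, x0, n_iter=10_000, seed=0):
    """Generic Metropolis sampler (fixed dimension, symmetric proposal).

    The sampler described in the paper generalizes this and the Gibbs,
    Barker, Hastings and reversible jump schemes.
    """
    rng = np.random.default_rng(seed)
    x, log_p = x0, log_target(x0)
    samples = np.empty(n_iter)
    for i in range(n_iter):
        y = proposal(x, rng)                 # symmetric proposal assumed
        log_q = log_target(y)
        if np.log(rng.random()) < log_q - log_p:
            x, log_p = y, log_q              # accept the move
        samples[i] = x
    return samples

# Example: sample from a standard normal target with a random-walk proposal
draws = metropolis(lambda v: -0.5 * v * v,
                   lambda v, rng: v + rng.normal(0.0, 1.0), x0=0.0)
```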

Relevance: 80.00%

Abstract:

Research into consumer responses to event sponsorships has grown in recent years. However, the effects of consumer knowledge on sponsorship response have received little consideration. Consumers' event knowledge is examined to determine whether experts and novices differ in information processing of sponsorships and whether a sponsor's brand equity influences perceptions of sponsor-event fit. Six sponsors (three high equity/three low equity) were paired with six events. Results of hypothesis testing indicate that experts generate more total thoughts about a sponsor-event combination. Experts and novices do not differ in sponsor-event congruence for high-brand-equity sponsors, but event experts perceive less of a match between sponsor and event for low-brand-equity sponsors. (C) 2004 Wiley Periodicals, Inc.

Relevance: 80.00%

Abstract:

An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local FDR (false discovery rate) is provided for each gene. An attractive feature of the mixture model approach is that it provides a framework for the estimation of the prior probability that a gene is not differentially expressed, and this probability can subsequently be used in forming a decision rule. The rule can also be formed to take the false negative rate into account. We apply this approach to a well-known publicly available data set on breast cancer, and discuss our findings with reference to other approaches.
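A simplified sketch of the two-group local FDR idea described above, for per-gene z-statistics, with the marginal density estimated by a kernel density rather than a fitted parametric mixture; the pi0 estimate and calling threshold are illustrative.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

def local_fdr(z, pi0=None):
    """Two-group local FDR sketch: fdr(z) = pi0 * f0(z) / f(z).

    f0 is the theoretical N(0,1) null density, f is a kernel estimate of the
    marginal density, and pi0 (the prior probability that a gene is not
    differentially expressed) defaults to a crude estimate from |z| < 1.
    """
    z = np.asarray(z, dtype=float)
    f = gaussian_kde(z)(z)
    f0 = norm.pdf(z)
    if pi0 is None:
        pi0 = min(1.0, np.mean(np.abs(z) < 1) / (norm.cdf(1) - norm.cdf(-1)))
    return np.clip(pi0 * f0 / f, 0, 1)

# Genes with estimated local FDR below 0.2 would be called differentially expressed
z_scores = np.random.default_rng(0).standard_normal(5000)
called = np.where(local_fdr(z_scores) < 0.2)[0]
```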