928 results for Binary hypothesis testing


Relevance: 80.00%

Abstract:

Evolutionary trees are often estimated from DNA or RNA sequence data. How much confidence should we have in the estimated trees? In 1985, Felsenstein [Felsenstein, J. (1985) Evolution 39, 783–791] suggested the use of the bootstrap to answer this question. Felsenstein’s method, which in concept is a straightforward application of the bootstrap, is widely used, but has been criticized as biased in the genetics literature. This paper concerns the use of the bootstrap in the tree problem. We show that Felsenstein’s method is not biased, but that it can be corrected to better agree with standard ideas of confidence levels and hypothesis testing. These corrections can be made by using the more elaborate bootstrap method presented here, at the expense of considerably more computation.
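As a toy illustration of Felsenstein's procedure (not the corrected bootstrap of this paper): resample alignment columns with replacement and record how often a grouping of interest recurs. The sequences and the distance-based grouping statistic below are invented stand-ins for real tree estimation.

```python
import random

random.seed(0)

# Toy alignment: in practice these would be real DNA sequences.
alignment = {
    "A": "ACGTACGTACGTACGTACGT",
    "B": "ACGTACGAACGTACGTACGA",
    "C": "TTGTACGTTCGTAAGTACGT",
}

def dist(x, y, cols):
    """Hamming distance restricted to the sampled columns."""
    return sum(x[i] != y[i] for i in cols)

def ab_grouped(cols):
    """Statistic of interest: are A and B closer to each other than to C?"""
    a, b, c = alignment["A"], alignment["B"], alignment["C"]
    return dist(a, b, cols) < min(dist(a, c, cols), dist(b, c, cols))

n = len(alignment["A"])
B = 1000
# Felsenstein's bootstrap: resample columns with replacement, recompute
# the statistic on each pseudo-alignment, and report the recurrence rate.
support = sum(
    ab_grouped([random.randrange(n) for _ in range(n)]) for _ in range(B)
) / B
print(f"bootstrap support for (A,B): {support:.2f}")
```

The paper's point is that this raw proportion does not behave exactly like a standard confidence level, which is what the more elaborate correction addresses.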

Relevance: 80.00%

Abstract:

Site-directed mutagenesis and combinatorial libraries are powerful tools for providing information about the relationship between protein sequence and structure. Here we report two extensions that expand the utility of combinatorial mutagenesis for the quantitative assessment of hypotheses about the determinants of protein structure. First, we show that resin-splitting technology, which allows the construction of arbitrarily complex libraries of degenerate oligonucleotides, can be used to construct more complex protein libraries for hypothesis testing than can be constructed from oligonucleotides limited to degenerate codons. Second, using eglin c as a model protein, we show that regression analysis of activity scores from library data can be used to assess the relative contributions to the specific activity of the amino acids that were varied in the library. The regression parameters derived from the analysis of a 455-member sample from a library wherein four solvent-exposed sites in an α-helix can contain any of nine different amino acids are highly correlated (P < 0.0001, R2 = 0.97) to the relative helix propensities for those amino acids, as estimated by a variety of biophysical and computational techniques.
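The regression step can be sketched as follows; the residue alphabet, effect sizes, and simulated library are invented stand-ins, and ordinary least squares on one-hot site indicators is used in place of whatever exact model the authors fit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 4 varied sites, 3 possible residues per site.
# True per-residue contributions to activity (unknown in a real experiment).
true_effect = {"A": 1.0, "G": 0.2, "P": -1.0}
residues = list(true_effect)

n_sites, n_lib = 4, 455
variants = rng.choice(residues, size=(n_lib, n_sites))

# Activity score = sum of per-site contributions + measurement noise.
y = np.array([sum(true_effect[r] for r in v) for v in variants])
y += rng.normal(0.0, 0.1, size=n_lib)

# One-hot (indicator) encoding of which residue occupies each site.
X = np.column_stack([
    (variants[:, s] == r).astype(float)
    for s in range(n_sites) for r in residues
])

# Least-squares regression of activity on the indicators.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Average the per-site estimates into one effect per residue.
est = {r: coef[i::len(residues)].mean() for i, r in enumerate(residues)}
print(est)
```

The recovered per-residue effects are only identified up to a per-site constant, but their ordering (here A above G above P) is preserved, which is what a comparison against helix propensities needs.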


Relevance: 80.00%

Abstract:

The controversy over the interpretation of DNA profile evidence in forensic identification can be attributed in part to confusion over the mode(s) of statistical inference appropriate to this setting. Although there has been substantial discussion in the literature of, for example, the role of population genetics issues, few authors have made explicit the inferential framework which underpins their arguments. This lack of clarity has led both to unnecessary debates over ill-posed or inappropriate questions and to the neglect of some issues which can have important consequences. We argue that the mode of statistical inference which seems to underlie the arguments of some authors, based on a hypothesis testing framework, is not appropriate for forensic identification. We propose instead a logically coherent framework in which, for example, the roles both of the population genetics issues and of the nonscientific evidence in a case are incorporated. Our analysis highlights several widely held misconceptions in the DNA profiling debate. For example, the profile frequency is not directly relevant to forensic inference. Further, very small match probabilities may in some settings be consistent with acquittal. Although DNA evidence is typically very strong, our analysis of the coherent approach highlights situations which can arise in practice where alternative methods for assessing DNA evidence may be misleading.
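The central arithmetic of the coherent framework can be illustrated with Bayes' theorem: the DNA evidence enters as a likelihood ratio multiplying the prior odds carried by the non-scientific evidence. All numbers below are hypothetical.

```python
# Illustrative Bayes-factor arithmetic (all numbers hypothetical).
# H1: the suspect is the source of the DNA; H2: someone else is.

match_probability = 1e-6          # P(match | H2); note: not a "profile frequency"
likelihood_ratio = 1.0 / match_probability

# Prior odds from the non-scientific evidence in the case, e.g. a very
# large pool of alternative possible sources and little else against H1:
prior_odds = 1.0 / 1e7

posterior_odds = likelihood_ratio * prior_odds
posterior_prob = posterior_odds / (1.0 + posterior_odds)
print(f"posterior odds: {posterior_odds:.2f}")
print(f"posterior probability of H1: {posterior_prob:.3f}")
```

With these (hypothetical) inputs the posterior probability stays below one half despite the one-in-a-million match probability, illustrating the abstract's point that very small match probabilities may in some settings be consistent with acquittal.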

Relevance: 80.00%

Abstract:

Student dropout affects both private and public universities in Brazil, causing financial losses proportional to its incidence: 12% and 26%, respectively, at the national level, and 23% at the Universidade de São Paulo (USP), which is why the variables that govern this behavior must be understood. In this context, the study presents the losses caused by dropout and the importance of researching it at the Escola Politécnica da USP (EPUSP) (Section 1), develops a literature review on the causes of dropout (Section 2), and proposes methods for obtaining dropout rates from the Federal Government and USP databases (Section 3). The results are in Section 4. To draw inferences about the causes of dropout at EPUSP, databases were analyzed which, described and processed in Section 5.1, contain information (e.g., type of admission and exit, length of enrollment, and academic transcript) on 16,664 students admitted between 1970 and 2000; statistical models were proposed, and the concepts of the χ² (chi-squared) and Student's t hypothesis tests used in the research were detailed (Section 5.2). The descriptive statistics show that EPUSP suffers 15% dropout (with the highest incidence in the 2nd year: 24.65%), that dropouts remain enrolled for 3.8 years, that the probability of dropping out grows after the 6th year, and that the algebra and calculus courses have the highest failure rates in the 1st year (Section 5.3). The inferential statistics demonstrated relationships between dropout and mode of admission to EPUSP, and between dropout and failure in 1st-year EPUSP courses; combined with the descriptive statistics, these results point to vocational deficit, lack of persistence, lack of acclimatization to EPUSP, and deficiencies in prior schooling as the variables responsible for dropout (Section 5.4).
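The chi-squared independence test described in the abstract can be sketched on a hypothetical 2x2 table of admission mode versus dropout; the counts below are invented, not the study's data.

```python
# Chi-squared test of independence on a hypothetical 2x2 table
# (admission mode vs. dropped out or stayed).
table = [[120, 880],   # admission mode 1: dropped out / stayed
         [ 60, 940]]   # admission mode 2: dropped out / stayed

row = [sum(r) for r in table]
col = [sum(c) for c in zip(*table)]
total = sum(row)

# chi2 = sum over cells of (observed - expected)^2 / expected,
# with expected count = row total * column total / grand total.
chi2 = sum(
    (table[i][j] - row[i] * col[j] / total) ** 2 / (row[i] * col[j] / total)
    for i in range(2) for j in range(2)
)
print(f"chi-squared statistic: {chi2:.2f}")
```

Here the statistic far exceeds the 5% critical value of 3.84 for one degree of freedom, so independence between admission mode and dropout would be rejected for these invented counts.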

Relevance: 80.00%

Abstract:

A recent development of the Markov chain Monte Carlo (MCMC) technique is the emergence of MCMC samplers that allow transitions between different models. Such samplers make possible a range of computational tasks involving models, including model selection, model evaluation, model averaging and hypothesis testing. An example of this type of sampler is the reversible jump MCMC sampler, which is a generalization of the Metropolis-Hastings algorithm. Here, we present a new MCMC sampler of this type. The new sampler is a generalization of the Gibbs sampler, but somewhat surprisingly, it also turns out to encompass as particular cases all of the well-known MCMC samplers, including those of Metropolis, Barker, and Hastings. Moreover, the new sampler generalizes the reversible jump MCMC. It therefore appears to be a very general framework for MCMC sampling. This paper describes the new sampler and illustrates its use in three applications in Computational Biology, specifically determination of consensus sequences, phylogenetic inference and delineation of isochores via multiple change-point analysis.
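As a reminder of the classic special case the new sampler is said to encompass, here is plain Metropolis with a symmetric Gaussian proposal (this is not the paper's generalized Gibbs sampler; the target and settings are illustrative).

```python
import math
import random

random.seed(0)

def metropolis(log_target, x0, steps, scale=1.0):
    """Classic Metropolis sampler with a symmetric Gaussian proposal."""
    x, lp = x0, log_target(x0)
    samples = []
    for _ in range(steps):
        y = x + random.gauss(0.0, scale)
        lq = log_target(y)
        # Accept with probability min(1, target(y)/target(x)).
        if math.log(random.random()) < lq - lp:
            x, lp = y, lq
        samples.append(x)
    return samples

# Target: standard normal density, known only up to a constant.
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
mean = sum(samples) / len(samples)
print(f"sample mean: {mean:.3f}")  # should be near 0
```

Reversible jump and the paper's sampler generalize this scheme so that the proposal may also move between models of different dimension.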

Relevance: 80.00%

Abstract:

Research into consumer responses to event sponsorships has grown in recent years. However, the effects of consumer knowledge on sponsorship response have received little consideration. Consumers' event knowledge is examined to determine whether experts and novices differ in information processing of sponsorships and whether a sponsor's brand equity influences perceptions of sponsor-event fit. Six sponsors (three high equity/three low equity) were paired with six events. Results of hypothesis testing indicate that experts generate more total thoughts about a sponsor-event combination. Experts and novices do not differ in sponsor-event congruence for high-brand-equity sponsors, but event experts perceive less of a match between sponsor and event for low-brand-equity sponsors. (C) 2004 Wiley Periodicals, Inc.

Relevance: 80.00%

Abstract:

An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local FDR (false discovery rate) is provided for each gene. An attractive feature of the mixture model approach is that it provides a framework for the estimation of the prior probability that a gene is not differentially expressed, and this probability can subsequently be used in forming a decision rule. The rule can also be formed to take the false negative rate into account. We apply this approach to a well-known publicly available data set on breast cancer, and discuss our findings with reference to other approaches.

Relevance: 80.00%

Abstract:

An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local false discovery rate is provided for each gene, and it can be implemented so that the implied global false discovery rate is bounded as with the Benjamini-Hochberg methodology based on tail areas. The latter procedure is too conservative, unless it is modified according to the prior probability that a gene is not differentially expressed. An attractive feature of the mixture model approach is that it provides a framework for the estimation of this probability and its subsequent use in forming a decision rule. The rule can also be formed to take the false negative rate into account.
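For comparison, the Benjamini-Hochberg step-up procedure mentioned above can be sketched in a few lines; the p-values are hypothetical.

```python
# Benjamini-Hochberg step-up procedure (p-values are hypothetical).
def benjamini_hochberg(pvals, alpha=0.05):
    """Return indices of hypotheses rejected at FDR level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0  # largest rank i with p_(i) <= (i/m) * alpha
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank
    # Reject the k smallest p-values.
    return sorted(order[:k])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.36]
rejected = benjamini_hochberg(pvals, alpha=0.05)
print(rejected)  # → [0, 1]
```

The mixture-model approach of the abstract instead assigns each gene a local false discovery rate, which can be thresholded so that the implied global FDR is bounded in the same way.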


Relevance: 80.00%

Abstract:

The objective of this study is to test the theory of Purchasing Power Parity (PPP), in both its absolute and relative versions, for Brazil over the period 1995 to 2010, using econometric procedures to establish, through hypothesis tests, the validation or rejection of the PPP theory. For this verification, the Dickey-Fuller (DF) and Augmented Dickey-Fuller (ADF) tests and the Engle-Granger and Johansen cointegration tests are used. The study focuses on the United States and Brazil, in view of the trade flow between these countries and their importance in the world economy. Using the IPA and PPI price indices, the validation of the PPP theory in its relative and absolute versions is analyzed, leading to the conclusion that the relative version is accepted and the absolute version rejected.
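The relative version of PPP that the study accepts can be illustrated with simple arithmetic: the predicted change in the exchange rate offsets the inflation differential between the two countries. The inflation figures below are hypothetical, not the study's data.

```python
# Relative PPP: the exchange rate change offsets the inflation differential.
# Both inflation rates below are hypothetical annual figures.
inflation_brazil = 0.066   # e.g. measured by the IPA index
inflation_usa = 0.021      # e.g. measured by the PPI index

# Predicted depreciation of the BRL/USD rate under relative PPP:
predicted = (1 + inflation_brazil) / (1 + inflation_usa) - 1
print(f"predicted depreciation: {predicted:.3%}")
```

Testing whether observed exchange-rate changes track this prediction over time is what the unit-root and cointegration tests in the study formalize.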

Relevance: 80.00%

Abstract:

Spectral and coherence methodologies are ubiquitous for the analysis of multiple time series. Partial coherence analysis may be used to try to determine graphical models for brain functional connectivity. The outcome of such an analysis may be considerably influenced by factors such as the degree of spectral smoothing, line and interference removal, matrix inversion stabilization and the suppression of effects caused by side-lobe leakage, the combination of results from different epochs and people, and multiple hypothesis testing. This paper examines each of these steps in turn and provides a possible path which produces relatively ‘clean’ connectivity plots. In particular we show how spectral matrix diagonal up-weighting can simultaneously stabilize spectral matrix inversion and reduce effects caused by side-lobe leakage, and use the stepdown multiple hypothesis test procedure to help formulate an interaction strength.
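Diagonal up-weighting before inversion can be sketched as follows; the data, the real-valued stand-in for a smoothed spectral matrix, and the weighting factor are all illustrative assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a smoothed spectral matrix: real-valued here for simplicity
# (spectral matrices are complex Hermitian in general). Channel 2 is nearly
# a copy of channel 0, so S is close to singular.
X = rng.normal(size=(3, 200))
X[2] = X[0] + 1e-3 * rng.normal(size=200)
S = X @ X.T / 200

# Diagonal up-weighting: add a small multiple of the identity, scaled by
# the average diagonal power. The factor 0.01 is an illustrative choice.
lam = 0.01 * np.trace(S) / S.shape[0]
S_reg = S + lam * np.eye(S.shape[0])

print("condition number before:", np.linalg.cond(S))
print("condition number after: ", np.linalg.cond(S_reg))
```

The up-weighted matrix inverts stably, at the cost of a small, controlled bias in the resulting partial coherence estimates.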

Relevance: 80.00%

Abstract:

This research sets out to assess whether the PHC system in rural Nigeria is effective by testing the research hypothesis: 'PHC can be effective if and only if the Health Care Delivery System matches the attitudes and expectations of the Community'. The field surveys for this task were carried out in Ibo, Yoruba, and Hausa rural communities. A variety of research techniques were used, including questionnaires, interviews, and personal observation of events in the rural communities. This thesis comprises three main parts. Part I traces the socio-cultural aspects of PHC in rural Nigeria, describes PHC management activities in Nigeria, and examines the practical problems inherent in the system. Part II describes the theoretical and practical research techniques used for the study and concentrates on the fieldwork programme, data analysis, and the testing of the research hypothesis. Part III focuses on general strategies to make the PHC system in Nigeria more effective; the research contributions to knowledge and a summary of the study's main conclusions are also highlighted in this part. Testing the research hypothesis stated above led to the conclusion that PHC in rural Nigeria is ineffective, as revealed by people's low opinion of the system and dissatisfaction with PHC services. Many people expressed the view that they could not obtain health care services in time, at a cost they could afford, and in a manner acceptable to them. Following these conclusions, some alternative ways to implement PHC programmes in rural Nigeria are put forward to make the Nigerian PHC system more effective.

Relevance: 80.00%

Abstract:

Citation information: Armstrong RA, Davies LN, Dunne MCM & Gilmartin B. Statistical guidelines for clinical studies of human vision. Ophthalmic Physiol Opt 2011, 31, 123-136. doi: 10.1111/j.1475-1313.2010.00815.x ABSTRACT: Statistical analysis of data can be complex, and different statisticians may disagree as to the correct approach, leading to conflict between authors, editors, and reviewers. The objective of this article is to provide some statistical advice for contributors to optometric and ophthalmic journals, to provide advice specifically relevant to clinical studies of human vision, and to recommend statistical analyses that could be used in a variety of circumstances. In submitting an article in which quantitative data are reported, authors should clearly describe the statistical procedures they have used and justify each stage of the analysis. This is especially important if more complex or 'non-standard' analyses have been carried out. The article begins with some general comments relating to data analysis concerning sample size and 'power', hypothesis testing, parametric and non-parametric variables, 'bootstrap methods', one- and two-tailed testing, and the Bonferroni correction. More specific advice is then given with reference to particular statistical procedures that can be used on a variety of types of data. Where relevant, examples of correct statistical practice are given with reference to recently published articles in the optometric and ophthalmic literature.
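The Bonferroni correction mentioned in the abstract amounts to testing each of m comparisons at level alpha/m; a minimal sketch with hypothetical p-values:

```python
# Bonferroni correction: with m comparisons, test each at alpha/m
# to keep the family-wise error rate at alpha (p-values hypothetical).
alpha = 0.05
pvals = [0.003, 0.02, 0.04, 0.31]
m = len(pvals)

significant = [p for p in pvals if p < alpha / m]
print(significant)  # only p-values below 0.05/4 = 0.0125 survive
```

This control of the family-wise error rate is conservative when m is large, which is one reason the article discusses it alongside other multiple-testing options.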

Relevance: 80.00%

Abstract:

Although considerable effort has been invested in measuring banking efficiency using Data Envelopment Analysis, hardly any empirical research has focused on comparing banks in the Gulf States. This paper employs data on the Gulf States banking sector for the period 2000-2002 to develop efficiency scores and rankings for both Islamic and conventional banks. We then investigate productivity change using the Malmquist Index and decompose productivity into technical change and efficiency change. Further, hypothesis testing and statistical precision in the context of nonparametric efficiency and productivity measurement are employed. Specifically, cross-country analysis of efficiency and comparisons of efficiency between Islamic and conventional banks are investigated using the Mann-Whitney test.
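The Mann-Whitney test used for the Islamic-versus-conventional comparison reduces to counting, over all cross-sample pairs, how often one sample's score exceeds the other's; the efficiency scores below are hypothetical, not the paper's estimates.

```python
# Mann-Whitney U statistic on hypothetical efficiency scores.
def mann_whitney_u(a, b):
    """U for sample a: count of pairs where a_i beats b_j (ties count 1/2)."""
    return sum(
        1.0 if x > y else 0.5 if x == y else 0.0
        for x in a for y in b
    )

islamic = [0.91, 0.84, 0.88, 0.95, 0.79]
conventional = [0.72, 0.80, 0.85, 0.77, 0.90]

u = mann_whitney_u(islamic, conventional)
print(f"U = {u} out of {len(islamic) * len(conventional)}")
```

A U far from half the number of pairs suggests the two groups' efficiency distributions differ; in practice the statistic is compared against its null distribution (or normal approximation) to obtain a p-value.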