214 results for Multiple Hypothesis Testing
Abstract:
An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local FDR (false discovery rate) is provided for each gene. An attractive feature of the mixture model approach is that it provides a framework for the estimation of the prior probability that a gene is not differentially expressed, and this probability can subsequently be used in forming a decision rule. The rule can also be formed to take the false negative rate into account. We apply this approach to a well-known publicly available data set on breast cancer, and discuss our findings with reference to other approaches.
Abstract:
An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local false discovery rate is provided for each gene, and it can be implemented so that the implied global false discovery rate is bounded as with the Benjamini-Hochberg methodology based on tail areas. The latter procedure is too conservative, unless it is modified according to the prior probability that a gene is not differentially expressed. An attractive feature of the mixture model approach is that it provides a framework for the estimation of this probability and its subsequent use in forming a decision rule. The rule can also be formed to take the false negative rate into account.
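The π0-adjusted (adaptive) Benjamini-Hochberg rule described in the abstract above can be sketched as follows. This is a minimal illustration over hypothetical p-values, using a Storey-style estimator for the prior probability π0 that a gene is not differentially expressed; it is not the authors' implementation.

```python
import numpy as np

def estimate_pi0(pvals, lam=0.5):
    # Storey-style estimate of the proportion of true nulls:
    # null p-values are uniform, so the density above `lam`
    # is roughly pi0 * (1 - lam).
    return min(1.0, np.mean(pvals > lam) / (1.0 - lam))

def adaptive_bh(pvals, alpha=0.05):
    # Benjamini-Hochberg step-up rule with the threshold relaxed
    # by the estimated pi0 (less conservative when pi0 < 1).
    m = len(pvals)
    pi0 = estimate_pi0(pvals)
    order = np.argsort(pvals)
    thresh = alpha * np.arange(1, m + 1) / (m * pi0)
    passed = pvals[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject, pi0
```

When pi0 is well below 1, the relaxed threshold rejects more hypotheses than plain Benjamini-Hochberg while keeping the implied global FDR bounded at alpha.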
Abstract:
Testing for simultaneous vicariance across comparative phylogeographic data sets is a notoriously difficult problem hindered by mutational variance, the coalescent variance, and variability across pairs of sister taxa in parameters that affect genetic divergence. We simulate vicariance to characterize the behaviour of several commonly used summary statistics across a range of divergence times, and to characterize this behaviour in comparative phylogeographic data sets having multiple taxon-pairs. We found Tajima's D to be relatively uncorrelated with other summary statistics across divergence times, and using simple hypothesis testing of simultaneous vicariance given variable population sizes, we counter-intuitively found that the variance across taxon pairs in Nei and Li's net nucleotide divergence (pi(net)), a common measure of population divergence, is often inferior to using the variance in Tajima's D across taxon pairs as a test statistic to distinguish ancient simultaneous vicariance from variable vicariance histories. The opposite and more intuitive pattern is found for testing more recent simultaneous vicariance, and overall we found that depending on the timing of vicariance, one of these two test statistics can achieve high statistical power for rejecting simultaneous vicariance, given a reasonable number of intron loci (> 5 loci, 400 bp) and a range of conditions. These results suggest that components of these two composite summary statistics should be used in future simulation-based methods which can simultaneously use a pool of summary statistics to test the comparative phylogeographic hypotheses we consider here.
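The two candidate test statistics compared in the abstract above are simply across-pair dispersions of per-pair summaries. A minimal sketch, with hypothetical per-taxon-pair values (the abstract's power comparisons rest on coalescent simulations not reproduced here):

```python
import numpy as np

def dispersion_stats(tajimas_d, pi_net):
    # One summary value per sister-taxon pair.  The candidate test
    # statistics are the across-pair variances: low dispersion is
    # consistent with simultaneous vicariance, high dispersion with
    # variable divergence histories.
    tajimas_d = np.asarray(tajimas_d, dtype=float)
    pi_net = np.asarray(pi_net, dtype=float)
    return np.var(tajimas_d, ddof=1), np.var(pi_net, ddof=1)
```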
Abstract:
Background: Germline mutations in the CDKN2A gene, which encodes two proteins (p16INK4A and p14ARF), are the most common cause of inherited susceptibility to melanoma. We examined the penetrance of such mutations using data from eight groups from Europe, Australia and the United States that are part of The Melanoma Genetics Consortium. Methods: We analyzed 80 families with documented CDKN2A mutations and multiple cases of cutaneous melanoma. We modeled penetrance for melanoma using a logistic regression model incorporating survival analysis. Hypothesis testing was based on likelihood ratio tests. Covariates included gender, alterations in p14ARF protein, and population melanoma incidence rates. All statistical tests were two-sided. Results: The 80 analyzed families contained 402 melanoma patients, 320 of whom were tested for mutations; 291 were mutation carriers. We also tested 713 unaffected family members for mutations, and 194 were carriers. Overall, CDKN2A mutation penetrance was estimated to be 0.30 (95% confidence interval (CI) = 0.12 to 0.62) by age 50 years and 0.67 (95% CI = 0.31 to 0.96) by age 80 years. Penetrance was not statistically significantly modified by gender or by whether the CDKN2A mutation altered p14ARF protein. However, there was a statistically significant effect of residing in a location with a high population incidence rate of melanoma (P = .003). By age 50 years CDKN2A mutation penetrance reached 0.13 in Europe, 0.50 in the United States, and 0.32 in Australia; by age 80 years it was 0.58 in Europe, 0.76 in the United States, and 0.91 in Australia. Conclusions: This study, which gives the most informed estimates of CDKN2A mutation penetrance available, indicates that the penetrance varies with melanoma population incidence rates. Thus, the same factors that affect population incidence of melanoma may also mediate CDKN2A penetrance.
Abstract:
A recent development of the Markov chain Monte Carlo (MCMC) technique is the emergence of MCMC samplers that allow transitions between different models. Such samplers make possible a range of computational tasks involving models, including model selection, model evaluation, model averaging and hypothesis testing. An example of this type of sampler is the reversible jump MCMC sampler, which is a generalization of the Metropolis-Hastings algorithm. Here, we present a new MCMC sampler of this type. The new sampler is a generalization of the Gibbs sampler, but somewhat surprisingly, it also turns out to encompass as particular cases all of the well-known MCMC samplers, including those of Metropolis, Barker, and Hastings. Moreover, the new sampler generalizes the reversible jump MCMC. It therefore appears to be a very general framework for MCMC sampling. This paper describes the new sampler and illustrates its use in three applications in Computational Biology, specifically determination of consensus sequences, phylogenetic inference and delineation of isochores via multiple change-point analysis.
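As a point of reference for the samplers the abstract above generalizes, here is a minimal random-walk Metropolis-Hastings sketch; the target and proposal are hypothetical stand-ins, not the paper's applications.

```python
import numpy as np

def metropolis_hastings(log_target, proposal, x0, n_steps, rng):
    # Random-walk Metropolis-Hastings: a special case of the
    # general samplers discussed above.  `log_target` is the
    # unnormalised log density; `proposal` draws a symmetric step.
    x = x0
    samples = []
    for _ in range(n_steps):
        y = x + proposal(rng)
        # With a symmetric proposal the Hastings ratio reduces to
        # the Metropolis ratio target(y)/target(x).
        if np.log(rng.uniform()) < log_target(y) - log_target(x):
            x = y
        samples.append(x)
    return np.array(samples)
```

A reversible jump sampler extends this accept/reject step with cross-model moves whose acceptance ratio includes a Jacobian term, which is what allows transitions between models of different dimension.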
Abstract:
Two experiments tested predictions from a theory in which processing load depends on relational complexity (RC), the number of variables related in a single decision. Tasks from six domains (transitivity, hierarchical classification, class inclusion, cardinality, relative-clause sentence comprehension, and hypothesis testing) were administered to children aged 3-8 years. Complexity analyses indicated that the domains entailed ternary relations (three variables). Simpler binary-relation (two variables) items were included for each domain. Thus RC was manipulated with other factors tightly controlled. Results indicated that (i) ternary-relation items were more difficult than comparable binary-relation items, (ii) the RC manipulation was sensitive to age-related changes, (iii) ternary relations were processed at a median age of 5 years, (iv) cross-task correlations were positive, with all tasks loading on a single factor (RC), (v) RC factor scores accounted for 80% (88%) of age-related variance in fluid intelligence (compositionality of sets), (vi) binary- and ternary-relation items formed separate complexity classes, and (vii) the RC approach to defining cognitive complexity is applicable to different content domains. (C) 2002 Elsevier Science (USA). All rights reserved.
Abstract:
Research into consumer responses to event sponsorships has grown in recent years. However, the effects of consumer knowledge on sponsorship response have received little consideration. Consumers' event knowledge is examined to determine whether experts and novices differ in information processing of sponsorships and whether a sponsor's brand equity influences perceptions of sponsor-event fit. Six sponsors (three high equity/three low equity) were paired with six events. Results of hypothesis testing indicate that experts generate more total thoughts about a sponsor-event combination. Experts and novices do not differ in sponsor-event congruence for high-brand-equity sponsors, but event experts perceive less of a match between sponsor and event for low-brand-equity sponsors. (C) 2004 Wiley Periodicals, Inc.
Abstract:
Univariate linkage analysis is used routinely to localise genes for human complex traits. Often, many traits are analysed but the significance of linkage for each trait is not corrected for multiple trait testing, which increases the experiment-wise type-I error rate. In addition, univariate analyses do not realise the full power provided by multivariate data sets. Multivariate linkage is the ideal solution but it is computationally intensive, so genome-wide analysis and evaluation of empirical significance are often prohibitive. We describe two simple methods that efficiently alleviate these caveats by combining P-values from multiple univariate linkage analyses. The first method estimates empirical pointwise and genome-wide significance between one trait and one marker when multiple traits have been tested. It is as robust as an appropriate Bonferroni adjustment, with the advantage that no assumptions are required about the number of independent tests performed. The second method estimates the significance of linkage between multiple traits and one marker and, therefore, it can be used to localise regions that harbour pleiotropic quantitative trait loci (QTL). We show that this method has greater power than individual univariate analyses to detect a pleiotropic QTL across different situations. In addition, when traits are moderately correlated and the QTL influences all traits, it can outperform formal multivariate VC analysis. This approach is computationally feasible for any number of traits and was not affected by the residual correlation between traits. We illustrate the utility of our approach with a genome scan of three asthma traits measured in families with a twin proband.
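The abstract above combines P-values from multiple univariate linkage analyses using empirical (permutation-based) significance. As a classical stand-in for the combination idea, Fisher's method pools k independent P-values into one test; this sketch is illustrative and is not the authors' procedure.

```python
import numpy as np
from scipy.stats import chi2

def fisher_combine(pvals):
    # Fisher's method: -2 * sum(log p) follows a chi-square
    # distribution with 2k degrees of freedom under independence
    # of the k tests.
    stat = -2.0 * np.sum(np.log(pvals))
    return chi2.sf(stat, df=2 * len(pvals))
```

Note that correlated traits violate the independence assumption, which is precisely why the paper evaluates significance empirically rather than from a fixed reference distribution.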
Abstract:
Environmental effects on the concentration of photosynthetic pigments in micro-algae can be explained by dynamics of photosystem synthesis and deactivation. A model that couples photosystem losses to the relative cellular rates of energy harvesting (light absorption) and assimilation predicts optimal concentrations of light-harvesting pigments and balanced energy flow under environmental conditions that affect light availability and metabolic rates. Effects of light intensity, nutrient supply and temperature on growth rate and pigment levels were similar to general patterns observed across diverse micro-algal taxa. Results imply that dynamic behaviour associated with photophysical stress, and independent of gene regulation, might constitute one mechanism for photo-acclimation of photosynthesis.
Neural biopsies from patients with schizophrenia: Testing the neurodevelopmental hypothesis in vitro
Abstract:
Recent empirical studies have found significant evidence of departures from competition in the input side of the Australian bread, breakfast cereal and margarine end-product markets. For example, Griffith (2000) found that firms in some parts of the processing and marketing sector exerted market power when purchasing grains and oilseeds from farmers. As noted at the time, this result accorded well with the views of previous regulatory authorities (p.358). In the mid-1990s, the Prices Surveillance Authority (PSA 1994) determined that the markets for products contained in the Breakfast Cereals and Cooking Oils and Fats indexes were "not effectively competitive" (p.14). The PSA consequently maintained price surveillance on the major firms in this product group. The Griffith result is also consistent with the large number of legal judgements against firms in this sector over the past decade for price fixing or other types of non-competitive behaviour. For example, bread manufacturer George Weston was fined twice during 2000 for non-competitive conduct, and the ACCC has also recently pursued and won cases against retailer Safeway in grains and oilseeds product lines.