12 results for Empirical Bayes method

at University of Queensland eSpace - Australia


Relevance:

90.00%

Abstract:

Motivation: An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach are either limited by the minimal assumptions they make or, where more specific assumptions are made, computationally intensive. Results: By converting the value of the test statistic used to test the significance of each gene to a z-score, we propose a simple two-component normal mixture that adequately models the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
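
For illustration, here is a minimal sketch of the two-component mixture step fitted by EM in Python; fixing the null component at N(0,1), the starting values and the simulated z-scores are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm

def fit_two_component(z, n_iter=200):
    """EM fit of pi0*N(0,1) + (1-pi0)*N(mu1, sigma1^2) to z-scores.
    Fixing the null component at N(0,1) is an illustrative
    simplification. Returns pi0 and each gene's posterior
    probability of being null."""
    pi0, mu1, s1 = 0.9, 2.0, 1.0                # assumed starting values
    for _ in range(n_iter):
        f0 = pi0 * norm.pdf(z, 0.0, 1.0)
        f1 = (1.0 - pi0) * norm.pdf(z, mu1, s1)
        tau0 = f0 / (f0 + f1)                   # E-step: P(gene is null | z)
        w1 = 1.0 - tau0                         # M-step: update mixture parameters
        pi0 = tau0.mean()
        mu1 = np.sum(w1 * z) / np.sum(w1)
        s1 = np.sqrt(np.sum(w1 * (z - mu1) ** 2) / np.sum(w1))
    return pi0, tau0

# Toy usage: 900 null genes and 100 differentially expressed ones
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0, 1, 900), rng.normal(2.5, 1, 100)])
pi0, post_null = fit_two_component(z)
print(f"estimated pi0 = {pi0:.3f}")
```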

Relevance:

80.00%

Abstract:

An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local false discovery rate is provided for each gene, and it can be implemented so that the implied global false discovery rate is bounded, as with the Benjamini-Hochberg methodology based on tail areas. The latter procedure is too conservative unless it is modified according to the prior probability that a gene is not differentially expressed. An attractive feature of the mixture model approach is that it provides a framework for estimating this probability and using it in forming a decision rule. The rule can also be formed to take the false negative rate into account.
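
A hedged sketch of one common way to turn per-gene local false discovery rates (the posterior null probabilities from a mixture fit) into a decision rule bounding the implied global FDR: reject in order of increasing local FDR while the running average stays below the target. This "average local FDR" construction is a standard one, not necessarily the paper's exact rule.

```python
import numpy as np

def select_genes(local_fdr, alpha=0.05):
    """Reject the genes with the smallest local FDR values while the
    average local FDR over the rejected set (an estimate of the
    implied global FDR) stays below alpha."""
    local_fdr = np.asarray(local_fdr)
    order = np.argsort(local_fdr)
    running_mean = np.cumsum(local_fdr[order]) / np.arange(1, len(local_fdr) + 1)
    n_reject = int(np.sum(running_mean <= alpha))
    rejected = np.zeros(len(local_fdr), dtype=bool)
    rejected[order[:n_reject]] = True
    return rejected

# Usage with the posterior null probabilities from the previous sketch:
# rejected = select_genes(post_null, alpha=0.05)
```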

Relevance:

40.00%

Abstract:

This research extends the consumer-based brand equity measurement approach to the measurement of the equity associated with retailers. The paper also addresses some of the limitations of current retailer equity measurement, such as a lack of clarity regarding its nature and dimensionality. We conceptualise retailer equity as a four-dimensional construct comprising retailer awareness, retailer associations, perceived retailer quality, and retailer loyalty. The paper reports the results of an empirical study of a convenience sample of 601 shopping mall consumers in an Australian state capital city. Following a confirmatory factor analysis using structural equation modelling to examine the dimensionality of the retailer equity construct, the proposed model is tested for two retailer categories: department stores and speciality stores. Results confirm the hypothesised four-dimensional structure.
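
A minimal sketch of the confirmatory-factor-analysis step in Python with semopy; the item names (aw1 ... loy3), the simulated responses and the choice of semopy itself are assumptions standing in for the authors' survey items and software.

```python
import numpy as np
import pandas as pd
from semopy import Model, calc_stats

# Hypothetical item names and simulated responses stand in for the
# survey data; the measurement model mirrors the four-factor structure
# described in the abstract.
rng = np.random.default_rng(6)
latent = rng.multivariate_normal(np.zeros(4), 0.5 + 0.5 * np.eye(4), size=601)
items = {}
for f, name in enumerate(["aw", "asc", "qual", "loy"]):
    for j in range(1, 4):
        items[f"{name}{j}"] = 0.8 * latent[:, f] + rng.normal(scale=0.6, size=601)
df = pd.DataFrame(items)

desc = """
awareness    =~ aw1 + aw2 + aw3
associations =~ asc1 + asc2 + asc3
quality      =~ qual1 + qual2 + qual3
loyalty      =~ loy1 + loy2 + loy3
"""
model = Model(desc)
model.fit(df)
print(calc_stats(model).T)   # fit indices (CFI, RMSEA, ...)
print(model.inspect())       # loadings and factor covariances
```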

Relevance:

30.00%

Abstract:

Subsequent to the influential paper of Chan, Karolyi, Longstaff and Sanders [Chan, K.C., Karolyi, G.A., Longstaff, F.A., Sanders, A.B., 1992. An empirical comparison of alternative models of the short-term interest rate. Journal of Finance 47, 1209-1227], the generalised method of moments (GMM) has been a popular technique for estimation and inference relating to continuous-time models of the short-term interest rate. GMM has been widely employed to estimate model parameters and to assess the goodness-of-fit of competing short-rate specifications. The current paper conducts a series of simulation experiments to document the bias and precision of GMM estimates of short-rate parameters, as well as the size and power of Hansen's J-test of over-identifying restrictions [Hansen, L.P., 1982. Large sample properties of generalised method of moments estimators. Econometrica 50, 1029-1054]. While the J-test appears to have appropriate size and good power in sample sizes commonly encountered in the short-rate literature, GMM estimates of the speed of mean reversion are shown to be severely biased. Consequently, it is dangerous to draw strong conclusions about the strength of mean reversion using GMM. In contrast, the parameter capturing the levels effect, which is important in differentiating between competing short-rate specifications, is estimated with little bias.
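
For orientation, a small sketch of CKLS-style GMM estimation on a simulated, discretised short-rate path; the simulated data, starting values and identity weighting matrix are illustrative choices, and with four moments and four parameters the system is exactly identified, so the over-identifying J-test is not shown.

```python
import numpy as np
from scipy.optimize import minimize

# Discretised CKLS-style model:
#   r[t+1] - r[t] = alpha + beta * r[t] + eps[t+1],
#   E[eps^2 | r[t]] = sigma^2 * r[t]^(2*gamma).
rng = np.random.default_rng(1)
n = 1000
alpha0, beta0, sigma0, gamma0 = 0.02, -0.2, 0.05, 0.5
r = np.empty(n)
r[0] = 0.10
for t in range(n - 1):
    shock = sigma0 * r[t] ** gamma0 * rng.normal()
    r[t + 1] = max(r[t] + alpha0 + beta0 * r[t] + shock, 1e-4)  # keep rate positive

def gbar(theta, r):
    """Sample means of the four standard CKLS moment conditions."""
    a, b, s, g = theta
    rt, dr = r[:-1], np.diff(r)
    eps = dr - a - b * rt
    u = eps ** 2 - s ** 2 * rt ** (2 * g)
    return np.column_stack([eps, eps * rt, u, u * rt]).mean(axis=0)

def objective(theta, r):
    m = gbar(theta, r)
    return m @ m                        # identity weighting matrix

res = minimize(objective, x0=[0.01, -0.1, 0.03, 0.7], args=(r,),
               method="Nelder-Mead", options={"maxiter": 5000})
print("GMM estimates (alpha, beta, sigma, gamma):", res.x)
```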

Relevance:

30.00%

Abstract:

Water-sampler equilibrium partitioning coefficients and aqueous boundary layer mass transfer coefficients for atrazine, diuron, hexazinone and fluometuron onto C18 and SDB-RPS Empore disk-based aquatic passive samplers have been determined experimentally under a laminar flow regime (Re = 5400). The method involved accelerating the time to equilibrium of the samplers by exposing them to three water concentrations, decreasing stepwise to 50% and then 25% of the original concentration. Assuming first-order Fickian kinetics across a rate-limiting aqueous boundary layer, both parameters are determined computationally by unconstrained nonlinear optimization. In addition, a method of estimating mass transfer coefficients (and therefore sampling rates) using the dimensionless Sherwood correlation developed for laminar flow over a flat plate is applied. For each of the herbicides, this correlation is validated to within 40% of the experimental data. The study demonstrates that for trace concentrations (below 0.1 μg/L) and these flow conditions, a naked Empore disk performs well as an integrative sampler over short deployments (up to 7 days) for the range of polar herbicides investigated. The SDB-RPS disk allows a longer integrative period than the C18 disk due to its higher sorbent mass and/or its more polar sorbent chemistry. This work also suggests that for certain passive sampler designs, empirical estimation of sampling rates may be possible using correlations that have been available in the chemical engineering literature for some time.
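
A sketch of the parameter-estimation step under stated assumptions: a one-compartment, boundary-layer-limited uptake model fitted by unconstrained nonlinear optimization to a stepped exposure (100%, then 50%, then 25% of the starting concentration). The disk geometry, time scale and noisy "observations" are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# One-compartment uptake across a rate-limiting boundary layer:
#   dCs/dt = (k * A / V) * (Cw(t) - Cs / Ksw)
A, V = 1.7e-3, 5.0e-7      # assumed disk area (m^2) and sorbent volume (m^3)

def cw(t):                 # stepped water concentration (ug/L), t in days
    return 0.10 if t < 5 else (0.05 if t < 10 else 0.025)

def simulate(k, Ksw, t_eval):
    def rhs(t, cs):
        return [(k * A / V) * (cw(t) - cs[0] / Ksw)]
    sol = solve_ivp(rhs, (0.0, t_eval[-1]), [0.0], t_eval=t_eval, max_step=0.1)
    return sol.y[0]

t_obs = np.linspace(0.5, 15.0, 30)
rng = np.random.default_rng(2)
true_k, true_Ksw = 0.9, 5000.0           # assumed m/day and dimensionless
obs = simulate(true_k, true_Ksw, t_obs) * (1 + 0.05 * rng.normal(size=30))

def sse(theta):
    k, Ksw = theta
    if k <= 0 or Ksw <= 0:
        return 1e12                      # keep the search in the physical region
    return np.sum((simulate(k, Ksw, t_obs) - obs) ** 2)

fit = minimize(sse, x0=[0.3, 2000.0], method="Nelder-Mead")
print("fitted k (m/day), Ksw:", fit.x)
```

The plateau of the uptake curve identifies Ksw, while the approach rate identifies k; the stepped exposure shortens the time needed to observe both.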

Relevance:

30.00%

Abstract:

Objective: To determine the role of the National Mental Health Strategy in the deinstitutionalization of patients in psychiatric hospitals in Queensland. Method: Regression analysis (using the maximum likelihood method) has been applied to relevant time-series datasets on public psychiatric institutions in Queensland. In particular, data on both patients and admissions per 10 000 population are analysed in detail from 1953-54 to the present, although data are presented from 1883-84. Results: These Queensland data indicate that deinstitutionalization was a continuing process from the 1950s to the present. However, it is clear that the experience varied from period to period. For example, the fastest change (in both patients and admissions) took place in the period 1953-54 to 1973-74, followed by the period 1974-75 to 1984-85. Conclusions: In large part, the two policies associated with deinstitutionalization, namely a discharge policy ('opening the back door') and an admission policy ('closing the front door') had been implemented before the advent of the National Mental Health Strategy in January 1993. Deinstitutionalization was most rapid in the 30-year period to the early 1980s: the process continued in the 1990s, but at a much slower rate. Deinstitutionalization was, in large part, over before the Strategy was developed and implemented.

Relevance:

30.00%

Abstract:

The measurement of the lifetime prevalence of depression in cross-sectional surveys is biased by recall problems. We estimated it indirectly for two countries using modelling, and quantified the underestimation in the empirical estimate for one. A microsimulation model was used to generate population-based epidemiological measures of depression. We fitted the model to 1- and 12-month prevalence data from the Netherlands Mental Health Survey and Incidence Study (NEMESIS) and the Australian Adult Mental Health and Wellbeing Survey. The lowest proportion of cases ever having an episode in their life is 30% of men and 40% of women, for both countries. This corresponds to a lifetime prevalence of 20% and 30%, respectively, in a cross-sectional setting (ages 15-65). The NEMESIS data were 38% lower than these estimates. We conclude that modelling enabled us to estimate the lifetime prevalence of depression indirectly. This method is useful in the absence of direct measurement, but it also showed that direct estimates are biased downwards by recall problems and by the cross-sectional setting.
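
A toy microsimulation in the same spirit: each simulated person is followed from age 15 to 65 with constant annual incidence and recovery probabilities, and lifetime prevalence is read off as the proportion ever having had an episode. The rates below are invented; the actual model was calibrated to the NEMESIS and Australian survey data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
p_inc, p_rec = 0.012, 0.5          # assumed annual incidence / recovery probabilities

depressed = np.zeros(n, dtype=bool)
ever = np.zeros(n, dtype=bool)

for _ in range(15, 65):            # one-year steps from age 15 to 65
    onset = ~depressed & (rng.random(n) < p_inc)
    recover = depressed & (rng.random(n) < p_rec)
    depressed = (depressed | onset) & ~recover
    ever |= onset                  # once a case, always counted as a lifetime case

print(f"proportion ever depressed by 65: {ever.mean():.2%}")
```

In a cross-sectional survey the respondents span all ages, so the observed "lifetime to date" figure sits below the ever-by-65 quantity even before recall bias is added.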

Relevance:

30.00%

Abstract:

A numerical method is introduced to determine the nuclear magnetic resonance frequency of a donor (P-31) doped inside a silicon substrate under the influence of an applied electric field. This phosphorus donor has been suggested for operation as a qubit for the realization of a solid-state scalable quantum computer. The operation of the qubit is achieved by a combination of the rotation of the phosphorus nuclear spin through a globally applied magnetic field and the selection of the phosphorus nucleus through a locally applied electric field. To realize the selection function, it is necessary to know the relationship between the applied electric field and the change in the nuclear magnetic resonance frequency of phosphorus. In this study, based on the wave functions obtained from effective-mass theory, we introduce an empirical correction factor to the wave functions at the donor nucleus. Using the corrected wave functions, we formulate a first-order perturbation theory for the system under the influence of an applied electric field. To calculate the potential distributions inside the silicon and silicon dioxide layers due to the applied electric field, we use multilayered Green's functions and solve an integral equation by the moment method. This enables us to consider more realistic, arbitrarily shaped, three-dimensional qubit structures. Using the calculated potential distributions, we investigated the effects of the thicknesses of the silicon and silicon dioxide layers, the relative position of the donor, and the applied electric field on the nuclear magnetic resonance frequency of the donor.
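
For orientation, the control mechanism can be summarised in two generic textbook relations (a hedged paraphrase, not the paper's exact formulation): the applied field perturbs the donor electron's wave function to first order, and the NMR frequency moves because the contact hyperfine coupling tracks the electron probability density at the donor nucleus r_0.

```latex
% Generic first-order perturbation of the donor ground state by the
% field-induced potential V_E, and the resulting frequency shift through
% the contact hyperfine coupling (illustrative textbook form):
\[
  |\Psi\rangle \;\approx\; |\Psi_0\rangle
    + \sum_{n\neq 0}
      \frac{\langle \Psi_n | V_E | \Psi_0 \rangle}{E_0 - E_n}\,|\Psi_n\rangle,
  \qquad
  \Delta\nu \;\propto\; |\Psi(\mathbf{r}_0)|^2 - |\Psi_0(\mathbf{r}_0)|^2 .
\]
```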

Relevance:

30.00%

Abstract:

Univariate linkage analysis is routinely used to localise genes for human complex traits. Often, many traits are analysed, but the significance of linkage for each trait is not corrected for multiple trait testing, which increases the experiment-wise type-I error rate. In addition, univariate analyses do not realise the full power provided by multivariate data sets. Multivariate linkage is the ideal solution, but it is computationally intensive, so genome-wide analysis and evaluation of empirical significance are often prohibitive. We describe two simple methods that efficiently address these limitations by combining P-values from multiple univariate linkage analyses. The first method estimates empirical pointwise and genome-wide significance between one trait and one marker when multiple traits have been tested. It is as robust as an appropriate Bonferroni adjustment, with the advantage that no assumptions are required about the number of independent tests performed. The second method estimates the significance of linkage between multiple traits and one marker and can therefore be used to localise regions that harbour pleiotropic quantitative trait loci (QTL). We show that this method has greater power than individual univariate analyses to detect a pleiotropic QTL across different situations. In addition, when traits are moderately correlated and the QTL influences all traits, it can outperform formal multivariate variance-components (VC) analysis. This approach is computationally feasible for any number of traits and is not affected by the residual correlation between traits. We illustrate the utility of our approach with a genome scan of three asthma traits measured in families with a twin proband.
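
A hedged sketch of the combination idea: per-trait P-values at one marker are pooled with Fisher's statistic and calibrated against an empirical null distribution obtained by simulation or permutation, which preserves the correlation between traits. Fisher's combination is an assumption here; the paper's exact rule may differ.

```python
import numpy as np

def fisher_stat(p):
    """Fisher's combination statistic, -2 * sum(log p)."""
    return -2.0 * np.sum(np.log(p))

def empirical_pvalue(p_obs, null_pvals):
    """null_pvals: (n_sim, n_traits) matrix of P-values generated under
    no linkage (e.g. by gene-dropping or permutation), so that the
    residual correlation between traits is preserved."""
    t_obs = fisher_stat(p_obs)
    t_null = -2.0 * np.log(null_pvals).sum(axis=1)
    return (1 + np.sum(t_null >= t_obs)) / (1 + len(t_null))

# Usage with invented numbers: three asthma traits at one marker.
# The uniform draws below are an independent-null placeholder; a real
# application would simulate the correlated null instead.
rng = np.random.default_rng(4)
p_obs = np.array([0.004, 0.03, 0.20])
null_pvals = rng.uniform(size=(10_000, 3))
print(f"combined empirical P = {empirical_pvalue(p_obs, null_pvals):.4f}")
```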

Relevance:

30.00%

Abstract:

In empirical studies of evolutionary algorithms, it is usually desirable to evaluate and compare algorithms using as many different parameter settings and test problems as possible, in order to have a clear and detailed picture of their performance. Unfortunately, the total number of experiments required may be very large, which often makes such research computationally prohibitive. In this paper, the application of a statistical method called racing is proposed as a general-purpose tool for reducing the computational requirements of large-scale experimental studies of evolutionary algorithms. Experimental results are presented showing that racing typically requires only a small fraction of the cost of an exhaustive experimental study.
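
A minimal racing loop under stated assumptions: after each new problem instance, configurations whose paired results are significantly worse than the current best are eliminated, so the remaining budget concentrates on the survivors. The run_ea stand-in and the paired t-test are illustrative choices, not the paper's exact procedure.

```python
import numpy as np
from scipy import stats

def run_ea(config, instance, rng):
    """Hypothetical stand-in for one EA run returning a score to minimise."""
    return config["true_mean"] + rng.normal(scale=0.5)

def race(configs, n_instances=100, alpha=0.05, min_runs=5, seed=0):
    rng = np.random.default_rng(seed)
    alive = list(range(len(configs)))
    results = [[] for _ in configs]
    for inst in range(n_instances):
        for i in alive:                       # evaluate only the survivors
            results[i].append(run_ea(configs[i], inst, rng))
        if len(results[alive[0]]) < min_runs:
            continue                          # wait for a minimum sample size
        best = min(alive, key=lambda i: np.mean(results[i]))
        survivors = []
        for i in alive:
            if i == best:
                survivors.append(i)
                continue
            # paired t-test against the current best configuration
            t, p = stats.ttest_rel(results[i], results[best])
            worse = np.mean(results[i]) > np.mean(results[best])
            if not (worse and p < alpha):
                survivors.append(i)
        alive = survivors
        if len(alive) == 1:
            break
    return alive, results

configs = [{"true_mean": m} for m in (1.0, 1.1, 1.5, 2.0)]  # invented configs
alive, _ = race(configs)
print("surviving configurations:", alive)
```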

Relevance:

30.00%

Abstract:

The Tree Augmented Naïve Bayes (TAN) classifier relaxes the sweeping independence assumptions of the Naïve Bayes approach by taking account of conditional probabilities. It does this in a limited sense, by incorporating the conditional probability of each attribute given the class and (at most) one other attribute. Boosting has previously proven very effective in improving the performance of Naïve Bayes classifiers, and in this paper we investigate its effectiveness when applied to the TAN classifier.
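
A sketch of the boosting wrapper (AdaBoost.M1) around any base classifier whose fit() accepts per-example weights; a weighted TAN implementation would slot in where GaussianNB is used below as a stand-in base learner.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def adaboost_m1(make_base, X, y, n_rounds=20):
    """AdaBoost.M1: reweight examples after each round so the next base
    learner focuses on previous mistakes."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    learners, alphas = [], []
    for _ in range(n_rounds):
        clf = make_base().fit(X, y, sample_weight=w)
        pred = clf.predict(X)
        err = np.sum(w * (pred != y)) / np.sum(w)
        if err == 0:                          # perfect learner: keep it, stop
            learners.append(clf)
            alphas.append(1.0)
            break
        if err >= 0.5:                        # no better than chance: stop
            break
        alpha = np.log((1 - err) / err)       # base learner's vote weight
        w *= np.exp(alpha * (pred != y))      # up-weight the mistakes
        w /= w.sum()
        learners.append(clf)
        alphas.append(alpha)
    return learners, alphas

def predict(learners, alphas, X, classes):
    votes = np.zeros((len(X), len(classes)))
    for clf, a in zip(learners, alphas):
        pred = clf.predict(X)
        for j, c in enumerate(classes):
            votes[:, j] += a * (pred == c)
    return np.asarray(classes)[votes.argmax(axis=1)]

# Usage on toy data
rng = np.random.default_rng(5)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
learners, alphas = adaboost_m1(GaussianNB, X, y)
print("training accuracy:", np.mean(predict(learners, alphas, X, (0, 1)) == y))
```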

Relevance:

30.00%

Abstract:

Racing algorithms have recently been proposed as a general-purpose method for performing model selection in machine learning algorithms. In this paper, we present an empirical study of the Hoeffding racing algorithm for selecting the k parameter in a simple k-nearest neighbor classifier. Fifteen widely used classification datasets from the UCI repository are used, and experiments are conducted across different confidence levels for racing. The results reveal significant sensitivity of the k-nn classifier to its model parameter value. The Hoeffding racing algorithm also varies widely in its performance, in terms of the computational savings gained over an exhaustive evaluation. While in some cases the savings gained are quite small, the racing algorithm proved to be highly robust against erroneously eliminating the optimal models. All results were strongly dependent on the datasets used.
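
A minimal sketch of Hoeffding racing over k, assuming leave-one-out 0/1 error on the iris data as a stand-in evaluation: models whose lower confidence bound on error exceeds the best model's upper bound are eliminated as points accumulate. The confidence level and the dataset are illustrative choices.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

def loo_error(k, X, y, i):
    """Leave-one-out 0/1 error of k-NN on point i."""
    mask = np.ones(len(y), dtype=bool)
    mask[i] = False
    clf = KNeighborsClassifier(n_neighbors=k).fit(X[mask], y[mask])
    return float(clf.predict(X[i:i + 1])[0] != y[i])

def hoeffding_race(ks, X, y, delta=0.05, seed=0):
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(y))
    alive = set(ks)
    errs = {k: [] for k in ks}
    for n, i in enumerate(order, start=1):
        for k in alive:                       # evaluate only surviving models
            errs[k].append(loo_error(k, X, y, i))
        # Hoeffding bound for 0/1 loss, union-corrected over the models
        eps = np.sqrt(np.log(2 * len(ks) / delta) / (2 * n))
        means = {k: np.mean(errs[k]) for k in alive}
        best_upper = min(means.values()) + eps
        alive = {k for k in alive if means[k] - eps <= best_upper}
        if len(alive) == 1:
            break
    return alive, means

X, y = load_iris(return_X_y=True)
alive, means = hoeffding_race(range(1, 16), X, y)
print("surviving k values:", sorted(alive))
```

Because the Hoeffding bound makes no distributional assumptions, eliminations are conservative; on small datasets the race may keep many k values alive, which mirrors the modest savings reported in some cases.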