53 results for Prior Probability
in University of Queensland eSpace - Australia
Abstract:
The probit model is a popular device for explaining binary choice decisions in econometrics. It has been used to describe choices such as labor force participation, travel mode, home ownership, and type of education. These and many more examples can be found in papers by Amemiya (1981) and Maddala (1983). Given the contribution of economics towards explaining such choices, and given the nature of data that are collected, prior information on the relationship between a choice probability and several explanatory variables frequently exists. Bayesian inference is a convenient vehicle for including such prior information. Given the increasing popularity of Bayesian inference it is useful to ask whether inferences from a probit model are sensitive to a choice between Bayesian and sampling theory techniques. Of interest is the sensitivity of inference on coefficients, probabilities, and elasticities. We consider these issues in a model designed to explain choice between fixed and variable interest rate mortgages. Two Bayesian priors are employed: a uniform prior on the coefficients, designed to be noninformative for the coefficients, and an inequality restricted prior on the signs of the coefficients. We often know, a priori, whether increasing the value of a particular explanatory variable will have a positive or negative effect on a choice probability. This knowledge can be captured by using a prior probability density function (pdf) that is truncated to be positive or negative. Thus, three sets of results are compared: those from maximum likelihood (ML) estimation, those from Bayesian estimation with an unrestricted uniform prior on the coefficients, and those from Bayesian estimation with a uniform prior truncated to accommodate inequality restrictions on the coefficients.
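To make the comparison concrete, here is a minimal sketch (simulated data and hypothetical variable names, not the authors' mortgage-choice data or code) of maximum likelihood probit estimation alongside a random-walk Metropolis sampler whose prior on the coefficients is uniform, optionally truncated so that a coefficient is restricted to a given sign.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulated stand-in for a binary-choice sample: intercept plus one covariate.
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.3, 0.8])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

def neg_loglik(beta):
    p = np.clip(norm.cdf(X @ beta), 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

beta_ml = minimize(neg_loglik, np.zeros(X.shape[1])).x   # maximum likelihood estimate

def log_prior(beta, signs=None):
    # Uniform prior; with sign restrictions it is truncated to the admissible orthant.
    if signs is not None and np.any(np.sign(beta) * signs < 0):
        return -np.inf
    return 0.0

def metropolis(signs=None, n_draws=20000, step=0.1):
    beta = beta_ml.copy()                                 # start at the ML estimate
    lp = -neg_loglik(beta) + log_prior(beta, signs)
    draws = []
    for _ in range(n_draws):
        prop = beta + step * rng.normal(size=beta.size)
        lp_prop = -neg_loglik(prop) + log_prior(prop, signs)
        if np.log(rng.uniform()) < lp_prop - lp:
            beta, lp = prop, lp_prop
        draws.append(beta.copy())
    return np.array(draws)[n_draws // 2:]                 # discard burn-in

draws_flat = metropolis()                                 # unrestricted uniform prior
draws_sign = metropolis(signs=np.array([0, 1]))           # slope restricted to be positive
print(beta_ml, draws_flat.mean(axis=0), draws_sign.mean(axis=0))

Posterior summaries of choice probabilities or elasticities can then be compared across the three sets of estimates in the same way.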
Abstract:
In diagnosis and prognosis, we should avoid intuitive “guesstimates” and seek a validated numerical aid
Abstract:
An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local FDR (false discovery rate) is provided for each gene. An attractive feature of the mixture model approach is that it provides a framework for the estimation of the prior probability that a gene is not differentially expressed, and this probability can subsequently be used in forming a decision rule. The rule can also be formed to take the false negative rate into account. We apply this approach to a well-known publicly available data set on breast cancer, and discuss our findings with reference to other approaches.
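To make the local FDR construction concrete, here is a minimal sketch under simplifying assumptions: the gene-level statistics are reduced to z-scores, the null component is the theoretical N(0,1), and the non-null component is a single normal fitted by EM (the paper's mixture model is more general). The estimated mixing weight pi0 plays the role of the prior probability that a gene is not differentially expressed, and the local FDR is the posterior probability of the null component.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
# Simulated z-scores: 90% null genes, 10% differentially expressed (hypothetical data).
z = np.concatenate([rng.normal(0, 1, 9000), rng.normal(2.5, 1, 1000)])

# Two-component mixture pi0*N(0,1) + (1-pi0)*N(mu1, sigma1), fitted by EM.
pi0, mu1, sigma1 = 0.8, 1.0, 1.0
for _ in range(200):
    f0 = norm.pdf(z, 0, 1)
    f1 = norm.pdf(z, mu1, sigma1)
    tau = pi0 * f0 / (pi0 * f0 + (1 - pi0) * f1)     # E-step: P(not differentially expressed | z)
    pi0 = tau.mean()                                 # M-step updates
    w = 1 - tau
    mu1 = np.sum(w * z) / np.sum(w)
    sigma1 = np.sqrt(np.sum(w * (z - mu1) ** 2) / np.sum(w))

local_fdr = pi0 * norm.pdf(z, 0, 1) / (pi0 * norm.pdf(z, 0, 1) + (1 - pi0) * norm.pdf(z, mu1, sigma1))
selected = local_fdr < 0.2                           # example decision rule
print(f"estimated pi0 = {pi0:.3f}, genes flagged = {selected.sum()}")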
Abstract:
An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local false discovery rate is provided for each gene, and it can be implemented so that the implied global false discovery rate is bounded as with the Benjamini-Hochberg methodology based on tail areas. The latter procedure is too conservative, unless it is modified according to the prior probability that a gene is not differentially expressed. An attractive feature of the mixture model approach is that it provides a framework for the estimation of this probability and its subsequent use in forming a decision rule. The rule can also be formed to take the false negative rate into account.
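The point about conservativeness can be illustrated with a generic sketch of the Benjamini-Hochberg step-up rule (hypothetical p-values, not the paper's implementation): the standard rule controls the false discovery rate at pi0 times the nominal level, so dividing the thresholds by an estimate of pi0 gives the less conservative adaptive version.

import numpy as np

def benjamini_hochberg(pvals, alpha=0.05, pi0=1.0):
    # Step-up BH procedure; pi0 < 1 gives the adaptive (pi0-adjusted) version.
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresholds = np.arange(1, m + 1) * alpha / (m * pi0)
    passed = p[order] <= thresholds
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

rng = np.random.default_rng(2)
pvals = np.concatenate([rng.uniform(size=900), rng.beta(0.5, 20, size=100)])  # hypothetical
print(benjamini_hochberg(pvals).sum(),            # standard BH
      benjamini_hochberg(pvals, pi0=0.9).sum())   # adjusted by an estimate of pi0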
Abstract:
There have been many models developed by scientists to assist decision-makers in making socio-economic and environmental decisions. It is now recognised that there is a shift in the dominant paradigm to making decisions with stakeholders, rather than making decisions for stakeholders. Our paper investigates two case studies where group model building has been undertaken for maintaining biodiversity in Australia. The first case study focuses on preservation and management of green spaces and biodiversity in metropolitan Melbourne under the umbrella of the Melbourne 2030 planning strategy. A geographical information system is used to collate a number of spatial datasets encompassing a range of cultural and natural assets data layers including: existing open spaces, waterways, threatened fauna and flora, ecological vegetation covers, registered cultural heritage sites, and existing land parcel zoning. Group model building is incorporated into the study through eliciting weightings and ratings of importance for each dataset from urban planners to formulate different urban green system scenarios. The second case study focuses on modelling ecoregions from spatial datasets for the state of Queensland. The modelling combines collaborative expert knowledge and a vast amount of environmental data to build biogeographical classifications of regions. An information elicitation process is used to capture expert knowledge of ecoregions as geographical descriptions, and to transform this into prior probability distributions that characterise regions in terms of environmental variables. This prior information is combined with measured data on the environmental variables within a Bayesian modelling technique to produce the final classified regions. We describe how linked views between descriptive information, mapping and statistical plots are used to decide upon representative regions that satisfy a number of criteria for biodiversity and conservation. This paper discusses the advantages and problems encountered when undertaking group model building. Future research will extend the group model building approach to include interested individuals and community groups.
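The Bayesian step in the second case study can be sketched in a much simplified form: each candidate region class has an elicited prior probability and an elicited distribution over environmental variables, and a grid cell is classified by its posterior class probabilities. The class priors, variables, and Gaussian summaries below are hypothetical illustrations, not the study's elicited values.

import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical elicited knowledge for three region classes: a prior weight plus a
# Gaussian description of two environmental variables (rainfall, mean temperature).
prior = np.array([0.5, 0.3, 0.2])
means = np.array([[1200.0, 24.0], [600.0, 20.0], [300.0, 27.0]])
covs = [np.diag([200.0**2, 2.0**2]), np.diag([150.0**2, 2.0**2]), np.diag([100.0**2, 3.0**2])]

def classify(x):
    # Posterior probability of each class given one cell's measured variables x.
    like = np.array([multivariate_normal.pdf(x, mean=m, cov=c) for m, c in zip(means, covs)])
    post = prior * like
    return post / post.sum()

cell = np.array([650.0, 21.0])   # measured rainfall (mm) and mean temperature (deg C)
print(classify(cell))            # most posterior mass falls on the second class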
Abstract:
Two experiments were conducted on the nature of expert perception in the sport of squash. In the first experiment, ten expert and fifteen novice players attempted to predict the direction and force of squash strokes from either a film display (occluded at variable time periods before and after the opposing player had struck the ball) or a matched point-light display (containing only the basic kinematic features of the opponent's movement pattern). Experts outperformed the novices under both display conditions, and the same basic time windows that characterised expert and novice pick-up of information in the film task also persisted in the point-light task. This suggests that the experts' perceptual advantage is directly related to their superior pick-up of essential kinematic information. In the second experiment, the vision of six expert and six less skilled players was occluded by remotely triggered liquid-crystal spectacles at quasi-random intervals during simulated match play. Players were required to complete their current stroke even when the display was occluded and their prediction performance was assessed with respect to whether they moved to the correct half of the court to match the direction and depth of the opponent's stroke. Consistent with experiment 1, experts were found to be superior in their advance pick-up of both directional and depth information when the display was occluded during the opponent's hitting action. However, experts also remained better than chance, and clearly superior to less skilled players, in their prediction performance under conditions where occlusion occurred before any significant pre-contact preparatory movement by the opposing player was visible. This additional source of expert superiority is attributable to their superior attunement to the information contained in the situational probabilities and sequential dependences within their opponent's pattern of play.
Abstract:
The phenomenon of probability backflow, previously quantified for a free nonrelativistic particle, is considered for a free particle obeying Dirac's equation. It is known that probability backflow can occur in the opposite direction to the momentum; that is to say, there exist positive-energy states in which the particle certainly has a positive momentum in a given direction, but for which the component of the probability flux vector in that direction is negative. It is shown that the maximum possible amount of probability that can flow backwards, over a given time interval of duration T, depends on the dimensionless parameter epsilon = √(4h/(mc²T)), where m is the mass of the particle and c is the speed of light. At epsilon = 0, the nonrelativistic value of approximately 0.039 for this maximum is recovered. Numerical studies suggest that the maximum decreases monotonically as epsilon increases from 0, and show that it depends on the size of m, h, and T, unlike the nonrelativistic case.
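For orientation, the parameter can be evaluated numerically. The sketch below assumes the h in the expression is the reduced Planck constant (an assumption not stated in the abstract) and takes an electron over a one-picosecond interval purely as an illustration.

import math

hbar = 1.054571817e-34   # J s (assumed interpretation of h)
m_e = 9.1093837015e-31   # kg, electron mass
c = 2.99792458e8         # m/s
T = 1e-12                # s, illustrative choice of time interval

epsilon = math.sqrt(4 * hbar / (m_e * c**2 * T))
print(f"epsilon = {epsilon:.1e}")   # ~7e-5, i.e. deep in the nonrelativistic regime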
Abstract:
In his study of the 'time of arrival' problem in the nonrelativistic quantum mechanics of a single particle, Allcock [1] noted that the direction of the probability flux vector is not necessarily the same as that of the mean momentum of a wave packet, even when the packet is composed entirely of plane waves with a common direction of momentum. Packets can be constructed, for example for a particle moving under a constant force, in which probability flows for a finite time in the opposite direction to the momentum. A similar phenomenon occurs for the Dirac electron. The maximum amount of probability backflow which can occur over a given time interval can be calculated in each case.
Abstract:
The aim of this study was to investigate the frequency of axillary metastasis in women with tubular carcinoma (TC) of the breast. Women who underwent axillary dissection for TC in the Western Sydney area (1984-1995) were identified retrospectively through a search of computerized records. A centralized pathology review was performed and tumours were classified as pure tubular (22) or mixed tubular (nine), on the basis of the invasive component containing 90 per cent or more, or 75-90 per cent tubule formation respectively. A Medline search of the literature was undertaken to compile a collective series (20 studies with a total of 680 patients) to address the frequency of nodal involvement in TC. A quantitative meta-analysis was used to combine the results of these studies. The overall frequency of nodal metastasis was five of 31 (16 per cent); one of 22 pure tubular and four of nine mixed tumours (P = 0.019). None of the tumours with a diameter of 10 mm or less (n = 16) had nodal metastasis compared with five of 15 larger tumours (P = 0.018). The meta-analysis of 680 women showed an overall frequency of nodal metastasis in TC of 13.8 (95 per cent confidence interval 9.3-18.3) per cent. The frequency of nodal involvement was 6.6 (1.7-11.4) per cent in pure TC (n = 244) and 25.0 (12.5-37.6) per cent in mixed TC (n = 149). A case may be made for observing the clinically negative axilla in women with a small TC (10 mm or less in diameter).
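The kind of quantitative pooling referred to can be illustrated with a generic fixed-effect, inverse-variance sketch; the event counts below are placeholders rather than the data from the 20 series, and the paper's exact pooling method may differ.

import numpy as np

# Hypothetical (events, total) pairs standing in for the individual series.
studies = [(3, 40), (1, 25), (5, 60), (2, 30), (4, 55)]

props, weights = [], []
for events, total in studies:
    p = (events + 0.5) / (total + 1)     # continuity correction avoids zero variance
    var = p * (1 - p) / total
    props.append(p)
    weights.append(1.0 / var)

props, weights = np.array(props), np.array(weights)
pooled = np.sum(weights * props) / np.sum(weights)
se = np.sqrt(1.0 / np.sum(weights))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled frequency = {100*pooled:.1f}% (95% CI {100*lo:.1f} to {100*hi:.1f})")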
Abstract:
There is concern over the safety of calcium channel blockers (CCBs) in acute coronary disease. We sought to determine if patients taking CCBs at the time of admission with acute myocardial infarction (AMI) had a higher case-fatality compared with those taking beta-blockers or neither medication. Clinical and drug treatment variables at the time of hospital admission predictive of survival at 28 days were examined in a community-based registry of patients aged under 65 years admitted to hospital for suspected AMI in Perth, Australia, between 1984 and 1993. Among 7766 patients, 1291 (16.6%) were taking a CCB and 1259 (16.2%) a beta-blocker alone at hospital admission. Patients taking CCBs had a worse clinical profile than those taking a beta-blocker alone or neither drug (control group), and a higher unadjusted 28-day mortality (17.6% versus 9.3% and 11.1% respectively, both P < 0.001). There was no significant heterogeneity with respect to mortality between nifedipine, diltiazem, or verapamil when used alone, or with a beta-blocker. After adjustment for factors predictive of death at 28 days, patients taking a CCB were found not to have an excess chance of death compared with the control group (odds ratio [OR] 1.06, 95% confidence interval [CI]; 0.87, 1.30), whereas those taking a beta-blocker alone had a lower odds of death (OR 0.75, 95% CI; 0.59, 0.94). These results indicate that established calcium channel blockade is not associated with an excess risk of death following AMI once other differences between patients are taken into account, but neither does it have the survival advantage seen with prior beta-blocker therapy.
Abstract:
Background and aim of the study: Results of valve re-replacement (reoperation) in 898 patients undergoing aortic valve replacement with cryopreserved homograft valves between 1975 and 1998 are reported. The study aim was to provide estimates of unconditional probability of valve reoperation and cumulative incidence function (actual risk) of reoperation. Methods: Valves were implanted by subcoronary insertion (n = 500), inclusion cylinder (n = 46), and aortic root replacement (n = 352). Probability of reoperation was estimated by adopting a mixture model framework within which estimates were adjusted for two risk factors: patient age at initial replacement, and implantation technique. Results: For a patient aged 50 years, the probability of reoperation in his/her lifetime was estimated as 44% and 56% for non-root and root replacement techniques, respectively. For a patient aged 70 years, estimated probability of reoperation was 16% and 25%, respectively. Given that a reoperation is required, patients with non-root replacement have a higher hazard rate than those with root replacement (hazard ratio = 1.4), indicating that non-root replacement patients tend to undergo reoperation earlier before death than root replacement patients. Conclusion: Younger patient age and root versus non-root replacement are risk factors for reoperation. Valve durability is much less in younger patients, while root replacement patients appear more likely to live longer and hence are more likely to require reoperation.
Abstract:
This paper presents a method for estimating the posterior probability density of the cointegrating rank of a multivariate error correction model. A second contribution is the careful elicitation of the prior for the cointegrating vectors derived from a prior on the cointegrating space. This prior obtains naturally from treating the cointegrating space as the parameter of interest in inference and overcomes problems previously encountered in Bayesian cointegration analysis. Using this new prior and Laplace approximation, an estimator for the posterior probability of the rank is given. The approach performs well compared with information criteria in Monte Carlo experiments.
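The Laplace step can be sketched generically: for each candidate rank, approximate the log marginal likelihood at the posterior mode and normalise across candidates. The toy log posteriors below are hypothetical stand-ins, not the error correction model or the cointegrating-space prior of the paper.

import numpy as np
from scipy.optimize import minimize

def numerical_hessian(f, x, eps=1e-4):
    # Central finite-difference Hessian of f at x.
    d = x.size
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei, ej = np.zeros(d), np.zeros(d)
            ei[i], ej[j] = eps, eps
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej) - f(x - ei + ej) + f(x - ei - ej)) / (4 * eps**2)
    return H

def log_marginal_laplace(log_post, d):
    # Laplace approximation: log p(y|M) ~ log_post(mode) + (d/2) log(2*pi) - 0.5 log|-H|.
    mode = minimize(lambda x: -log_post(x), np.zeros(d)).x
    H = numerical_hessian(log_post, mode)
    _, logdet = np.linalg.slogdet(-H)
    return log_post(mode) + 0.5 * d * np.log(2 * np.pi) - 0.5 * logdet

# Toy candidate "models" (e.g. ranks) with unnormalised log posteriors of different dimension.
models = {
    "rank 0": (lambda x: -0.5 * np.sum(x**2) - 1.0, 2),
    "rank 1": (lambda x: -0.5 * np.sum((x - 0.3)**2), 3),
}
logml = np.array([log_marginal_laplace(lp, d) for lp, d in models.values()])
post = np.exp(logml - logml.max())
print(dict(zip(models, post / post.sum())))   # approximate posterior probabilities of rank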
Abstract:
Acceptance-probability-controlled simulated annealing with an adaptive move generation procedure, an optimization technique derived from the simulated annealing algorithm, is presented. The adaptive move generation procedure was compared against the random move generation procedure on seven multiminima test functions, as well as on synthetic data resembling the optical constants of a metal. In all cases the algorithm proved to have faster convergence and superior escape from local minima. This algorithm was then applied to fit the model dielectric function to data for platinum and aluminum.
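A generic sketch of the core idea follows: the proposal step size is adapted so the observed acceptance ratio tracks a target while the temperature cools geometrically. The test function and schedule are assumptions for illustration, not the authors' algorithm or their dielectric-function fit.

import numpy as np

rng = np.random.default_rng(3)

def rastrigin(x):
    # A standard multiminima test function, standing in for the paper's test suite.
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def adaptive_annealing(f, x0, temp=10.0, cooling=0.95, n_outer=200, n_inner=50,
                       step=1.0, target_acc=0.4):
    x = np.asarray(x0, float)
    fx = f(x)
    best_x, best_f = x.copy(), fx
    for _ in range(n_outer):
        accepted = 0
        for _ in range(n_inner):
            prop = x + step * rng.normal(size=x.size)      # move generation
            fp = f(prop)
            if fp < fx or rng.uniform() < np.exp(-(fp - fx) / temp):
                x, fx = prop, fp
                accepted += 1
                if fx < best_f:
                    best_x, best_f = x.copy(), fx
        acc = accepted / n_inner
        step *= 1.1 if acc > target_acc else 0.9           # keep acceptance near the target
        temp *= cooling                                     # geometric cooling schedule
    return best_x, best_f

x_best, f_best = adaptive_annealing(rastrigin, x0=np.full(4, 3.0))
print(x_best, f_best)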