188 results for Percolation probability
Abstract:
Biologic valve re-replacement was examined in a series of 1343 patients who underwent aortic valve replacement at The Prince Charles Hospital, Brisbane, with a cryopreserved or 4 degrees C stored allograft valve or a xenograft valve. A parametric model approach was used to simultaneously model the competing risks of death without re-replacement and re-replacement before death. One hundred eleven patients underwent a first re-replacement for a variety of reasons (69 patients with xenograft valves, 28 patients with 4 degrees C stored allograft valves, and 14 patients with cryopreserved allograft valves). By multivariable analysis, younger age at operation was associated with xenograft, 4 degrees C stored allograft, and cryopreserved allograft valve re-replacement. However, this effect was examined in the context of the longer survival of younger patients, which increases their exposure to the risk of re-replacement compared with older patients, whose decreased survival reduces their probability of requiring valve re-replacement. In patients older than 60 years at the time of aortic valve replacement, the probability of re-replacement (for any reason) before death was similar for xenografts and cryopreserved allograft valves but higher for 4 degrees C stored valves. However, in patients younger than 60 years, the probability of re-replacement at any time during the remainder of the patient's life was lower with the cryopreserved allograft valve than with the xenograft valve or the 4 degrees C stored allograft.
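For intuition only (this is not the parametric model fitted in the study), the minimal sketch below computes the cumulative incidence of re-replacement when both cause-specific hazards are assumed constant; the hazard values are invented for illustration.

```python
import numpy as np

def cumulative_incidence(hazard_event, hazard_competing, t):
    """Cumulative incidence of the event of interest by time t when both
    cause-specific hazards are constant (exponential competing risks)."""
    total = hazard_event + hazard_competing
    return hazard_event / total * (1.0 - np.exp(-total * t))

# Hypothetical hazards (per year): re-replacement vs. death without re-replacement.
lam_rereplace, lam_death = 0.02, 0.05
years = np.arange(0, 21)
print(cumulative_incidence(lam_rereplace, lam_death, years))
```

With constant hazards, a higher competing death hazard (as in older patients) lowers the lifetime probability of ever reaching re-replacement, which is the qualitative point the abstract makes.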
Abstract:
A G-design of order n is a pair (P, B), where P is the vertex set of the complete graph K_n and B is an edge-disjoint decomposition of K_n into copies of the simple graph G. Following design terminology, we call these copies "blocks". Here K_4 - e denotes the complete graph K_4 with one edge removed. It is well known that a K_4 - e design of order n exists if and only if n ≡ 0 or 1 (mod 5), n ≥ 6. The intersection problem asks for which k it is possible to find two K_4 - e designs (P, B_1) and (P, B_2) of order n with |B_1 ∩ B_2| = k, that is, with precisely k common blocks. Here we completely solve this intersection problem for K_4 - e designs.
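As a rough illustration of what the intersection problem counts, the sketch below represents each K_4 - e block by its edge set and counts the blocks common to two collections; the block collections shown are arbitrary examples, not actual designs.

```python
from itertools import combinations

def k4_minus_e_block(a, b, c, d):
    """Edge set of a copy of K4 - e on vertices a,b,c,d with the edge {c,d} removed."""
    edges = {frozenset(p) for p in combinations((a, b, c, d), 2)}
    edges.discard(frozenset((c, d)))
    return frozenset(edges)

def common_blocks(design1, design2):
    """Number of blocks shared by two collections of blocks."""
    return len(set(design1) & set(design2))

# Tiny illustration with two (incomplete) collections of blocks.
B1 = {k4_minus_e_block(1, 2, 3, 4), k4_minus_e_block(1, 5, 2, 6)}
B2 = {k4_minus_e_block(1, 2, 3, 4), k4_minus_e_block(3, 5, 4, 6)}
print(common_blocks(B1, B2))  # -> 1
```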
Abstract:
The classification rules of linear discriminant analysis are defined by the true mean vectors and the common covariance matrix of the populations from which the data come. Because these true parameters are generally unknown, they are commonly estimated by the sample mean vector and covariance matrix of the data in a training sample randomly drawn from each population. However, these sample statistics are notoriously susceptible to contamination by outliers, a problem compounded by the fact that the outliers may be invisible to conventional diagnostics. High-breakdown estimation is a procedure designed to remove this cause for concern by producing estimates that are immune to serious distortion by a minority of outliers, regardless of their severity. In this article we motivate and develop a high-breakdown criterion for linear discriminant analysis and give an algorithm for its implementation. The procedure is intended to supplement rather than replace the usual sample-moment methodology of discriminant analysis either by providing indications that the dataset is not seriously affected by outliers (supporting the usual analysis) or by identifying apparently aberrant points and giving resistant estimators that are not affected by them.
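A minimal sketch of the general plug-in idea, assuming scikit-learn is available: robust location and scatter estimates (here the minimum covariance determinant estimator, not the specific high-breakdown criterion developed in the article) replace the sample moments in the linear discriminant rule.

```python
import numpy as np
from sklearn.covariance import MinCovDet

def robust_lda_fit(X_list):
    """Plug high-breakdown location/scatter estimates into the LDA rule.
    X_list: one (n_i, p) array per class."""
    locs, covs, ns = [], [], []
    for X in X_list:
        mcd = MinCovDet(random_state=0).fit(X)
        locs.append(mcd.location_)
        covs.append(mcd.covariance_)
        ns.append(len(X))
    # Pooled within-class scatter (weighted average of the robust covariances).
    pooled = sum(n * C for n, C in zip(ns, covs)) / sum(ns)
    prec = np.linalg.inv(pooled)
    return locs, prec

def classify(x, locs, prec):
    """Assign x to the class whose robust centre is nearest in Mahalanobis distance."""
    d2 = [(x - m) @ prec @ (x - m) for m in locs]
    return int(np.argmin(d2))
```

Comparing these robust assignments with the usual sample-moment LDA assignments gives a quick indication of whether outliers are materially affecting the analysis.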
Abstract:
In this paper we consider the problem of providing standard errors of the component means in normal mixture models fitted to univariate or multivariate data by maximum likelihood via the EM algorithm. Two methods of estimation of the standard errors are considered: the standard information-based method and the computationally intensive bootstrap method. They are compared empirically by their application to three real data sets and by a small-scale Monte Carlo experiment.
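A rough univariate sketch of the bootstrap approach, assuming scikit-learn's GaussianMixture as the EM fitter; label switching between bootstrap replicates is handled crudely by sorting the fitted means, which is only adequate for well-separated univariate components.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def bootstrap_se_of_means(x, n_components=2, n_boot=200, seed=0):
    """Bootstrap standard errors of the component means of a univariate
    normal mixture fitted by maximum likelihood (EM)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x).ravel()
    boot_means = []
    for _ in range(n_boot):
        xb = rng.choice(x, size=len(x), replace=True).reshape(-1, 1)
        gm = GaussianMixture(n_components=n_components, n_init=3).fit(xb)
        boot_means.append(np.sort(gm.means_.ravel()))  # crude label alignment
    return np.std(boot_means, axis=0, ddof=1)
```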
Abstract:
In a recent paper [16], one of us identified all of the quasi-stationary distributions for a non-explosive, evanescent birth-death process for which absorption is certain, and established conditions for the existence of the corresponding limiting conditional distributions. Our purpose is to extend these results in a number of directions. We shall consider separately two cases depending on whether or not the process is evanescent. In the former case we shall relax the condition that absorption is certain. Furthermore, we shall allow for the possibility that the minimal process might be explosive, so that the transition rates alone will not necessarily determine the birth-death process uniquely. Although we shall be concerned mainly with the minimal process, our most general results hold for any birth-death process whose transition probabilities satisfy both the backward and the forward Kolmogorov differential equations.
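For a numerical feel (not part of the paper's analysis), the quasi-stationary distribution of a birth-death process absorbed at 0 can be approximated by truncating the state space and taking the normalized left eigenvector of the restricted generator for the eigenvalue of maximal real part; the rates below are illustrative.

```python
import numpy as np

def qsd_birth_death(birth, death, n_max=200):
    """Approximate the quasi-stationary distribution on states 1..n_max of a
    birth-death process absorbed at 0, by truncating the state space."""
    Q = np.zeros((n_max, n_max))
    for i in range(1, n_max + 1):
        b, d = birth(i), death(i)
        row = i - 1
        Q[row, row] = -(b + d)
        if i < n_max:
            Q[row, row + 1] = b
        if i > 1:
            Q[row, row - 1] = d
    vals, vecs = np.linalg.eig(Q.T)          # left eigenvectors of Q
    k = np.argmax(vals.real)
    v = np.abs(vecs[:, k].real)
    return v / v.sum()

# Illustrative linear rates: birth i*1.0, death i*1.2 (absorption at 0 certain).
pi = qsd_birth_death(lambda i: 1.0 * i, lambda i: 1.2 * i)
print(pi[:5])
```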
Abstract:
We prove two asymptotic estimates for minimizers of a Ginzburg-Landau functional of the form $\int_\Omega \left[ \tfrac{1}{2}\,|\nabla u|^2 + \tfrac{1}{4\epsilon^2}\,(1 - |u|^2)^2\, W(x) \right] dx$.
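A small numerical sketch, assuming a square grid and forward differences: it merely evaluates a discretized version of this functional for a test field and weight W, without attempting any minimization.

```python
import numpy as np

def gl_energy(u, W, eps, h):
    """Discretized Ginzburg-Landau energy on a square grid with spacing h.
    u: complex-valued field (n, n); W: weight field (n, n)."""
    ux = (u[1:, :-1] - u[:-1, :-1]) / h            # forward differences
    uy = (u[:-1, 1:] - u[:-1, :-1]) / h
    grad2 = np.abs(ux) ** 2 + np.abs(uy) ** 2
    pot = (1.0 - np.abs(u[:-1, :-1]) ** 2) ** 2 * W[:-1, :-1] / (4.0 * eps ** 2)
    return np.sum(0.5 * grad2 + pot) * h ** 2

n, h, eps = 64, 1.0 / 64, 0.1
x = np.linspace(0, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.exp(1j * np.arctan2(Y - 0.5, X - 0.5))      # a vortex-like test field
W = np.ones_like(X)
print(gl_energy(u, W, eps, h))
```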
Abstract:
Izenman and Sommer (1988) used a non-parametric kernel density estimation technique to fit a seven-component model to the paper thickness of the 1872 Hidalgo stamp issue of Mexico. They observed an apparent conflict when fitting a normal mixture model with three components with unequal variances. This conflict is examined further by investigating the most appropriate number of components when fitting a normal mixture of components with equal variances.
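As a quick, informal way to compare component counts under the equal-variance assumption (not the formal testing approach of the paper), one can fit mixtures with a shared variance and compare BIC values, e.g. with scikit-learn:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def bic_by_components(x, max_components=10):
    """BIC for univariate normal mixtures with a common variance
    (covariance_type='tied') for 1..max_components components."""
    x = np.asarray(x).reshape(-1, 1)
    scores = {}
    for k in range(1, max_components + 1):
        gm = GaussianMixture(n_components=k, covariance_type="tied",
                             n_init=5, random_state=0).fit(x)
        scores[k] = gm.bic(x)
    return scores  # smaller BIC = preferred number of components
```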
Abstract:
The small-sample performance of Granger causality tests under different model dimensions, degrees of cointegration, directions of causality, and system stability is presented. Two tests based on maximum likelihood estimation of error-correction models (LR and WALD) are compared to a Wald test based on multivariate least squares estimation of a modified VAR (MWALD). In large samples all test statistics perform well in terms of size and power. For smaller samples, the LR and WALD tests perform better than the MWALD test. Overall, the LR test outperforms the other two in terms of size and power in small samples.
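A minimal sketch of a standard bivariate Granger causality test using statsmodels (this is the plain VAR-in-levels test, not the error-correction LR/WALD or modified-VAR MWALD variants compared in the paper); the data are simulated so that x Granger-causes y.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Simulated example: y depends on lagged x, so x should Granger-cause y.
rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.standard_normal()

# Tests whether the second column Granger-causes the first.
res = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
lr_stat, lr_pvalue, _ = res[2][0]["lrtest"]
print(lr_stat, lr_pvalue)
```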
Forecasting regional crop production using SOI phases: an example for the Australian peanut industry
Abstract:
Using peanuts as an example, a generic methodology is presented to forward-estimate regional crop production and associated climatic risks based on phases of the Southern Oscillation Index (SOI). Yield fluctuations caused by a highly variable rainfall environment are of concern to peanut processing and marketing bodies. The industry could profitably use forecasts of likely production to adjust its operations strategically. Significant, physically based lag relationships exist between an index of the ocean-atmosphere El Niño/Southern Oscillation phenomenon and future rainfall in Australia and elsewhere. Combining knowledge of SOI phases in November and December with output from a dynamic simulation model allows the derivation of yield probability distributions based on historic rainfall data. This information is available shortly after planting a crop and at least 3-5 months prior to harvest. The study shows that in years when the November-December SOI phase is positive there is an 80% chance of exceeding average district yields. Conversely, in years when the November-December SOI phase is either negative or rapidly falling there is only a 5% chance of exceeding average district yields, but a 95% chance of below-average yields. This information allows the industry to adjust strategically for the expected volume of production. The study shows that simulation models can enhance SOI signals contained in rainfall distributions by discriminating between useful and damaging rainfall events. The methodology can be applied to other industries and regions.
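A toy sketch of the conditioning step, with invented yields and SOI phase labels: group historical district yields by the November-December SOI phase and compute the chance of exceeding the long-term average yield within each phase.

```python
import pandas as pd

# Hypothetical historical district yields (t/ha) and Nov-Dec SOI phase labels.
df = pd.DataFrame({
    "yield": [2.1, 1.4, 2.6, 1.1, 2.3, 1.9, 2.8, 1.0, 2.4, 1.3],
    "soi":   ["positive", "negative", "positive", "falling", "positive",
              "near-zero", "positive", "negative", "positive", "falling"],
})

overall_mean = df["yield"].mean()
# Probability of exceeding the long-term average yield, conditional on SOI phase.
prob_above = df.groupby("soi")["yield"].apply(lambda y: (y > overall_mean).mean())
print(prob_above)
```

In the study itself, the yields entering this conditioning step come from a dynamic simulation model driven by historic rainfall, not from raw yield records.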
Abstract:
In this paper we suggest a model of sequential auctions with endogenous participation in which each bidder forms conjectures about the number of participants at each round. Then, after learning his value, each bidder decides whether or not to participate in the auction. In calculating his expected value, each bidder uses his conjectures about the number of participants for each possible subgroup; in equilibrium, the conjectured probability is compatible with the probability of staying in the auction. In our model, players face participation costs, bidders may buy as many objects as they wish, and they may drop out at any round, but once they drop out they cannot return to the auction. In particular, we can determine the number of participants and expected prices in equilibrium. We show that for any bidding strategy there exists such a probability of staying in the auction. For the case of stochastically independent objects, we show that in equilibrium every bidder who decides to continue submits a bid equal to his value at each round. When objects are stochastically identical, we show that expected prices are decreasing.
Abstract:
This article examines the efficiency of the National Football League (NFL) betting market. The standard ordinary least squares (OLS) regression methodology is replaced by a probit model. This circumvents potential econometric problems, and allows us to implement more sophisticated betting strategies where bets are placed only when there is a relatively high probability of success. In-sample tests indicate that probit-based betting strategies generate statistically significant profits. Whereas the profitability of a number of these betting strategies is confirmed by out-of-sample testing, there is some inconsistency among the remaining out-of-sample predictions. Our results also suggest that widely documented inefficiencies in this market tend to dissipate over time.
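A minimal sketch of the general strategy on simulated data, assuming statsmodels: fit a probit model for the probability that a bet wins and place bets only when the predicted probability clears a threshold (the predictor, threshold, and data are all invented for illustration).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
spread_error = rng.standard_normal(n)          # stand-in game-level predictor
latent = 0.4 * spread_error + rng.standard_normal(n)
cover = (latent > 0).astype(int)               # 1 if the bet side covers

X = sm.add_constant(spread_error)
probit = sm.Probit(cover, X).fit(disp=False)
p_hat = probit.predict(X)

# Bet only when the estimated probability of success is relatively high.
threshold = 0.60
bets = p_hat > threshold
win_rate = cover[bets].mean() if bets.any() else float("nan")
print(f"bets placed: {bets.sum()}, win rate: {win_rate:.3f}")
```

An honest evaluation of such a strategy would fit the model on an in-sample window and apply the threshold rule only to later, out-of-sample games, as the article does.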
Abstract:
Analysis of a major multi-site epidemiologic study of heart disease has required estimation of the pairwise correlation of several measurements across sub-populations. Because the measurements from each sub-population were subject to sampling variability, the Pearson product moment estimator of these correlations produces biased estimates. This paper proposes a model that takes into account within and between sub-population variation, provides algorithms for obtaining maximum likelihood estimates of these correlations and discusses several approaches for obtaining interval estimates.
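A small simulation of the attenuation problem the proposed model addresses: when each sub-population's means are estimated from finite samples, the naive Pearson correlation of the estimates is biased toward zero relative to the correlation of the true sub-population means (all numbers are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_per_site, true_rho = 50, 20, 0.8

# True paired sub-population means, correlated across sites.
cov = [[1.0, true_rho], [true_rho, 1.0]]
mu = rng.multivariate_normal([0, 0], cov, size=n_sites)

# Each site's means are estimated from a finite sample -> added sampling noise.
noise_sd = 1.0 / np.sqrt(n_per_site)
est = mu + rng.normal(scale=noise_sd, size=mu.shape)

print("correlation of true means:     ", round(float(np.corrcoef(mu.T)[0, 1]), 3))
print("correlation of estimated means:", round(float(np.corrcoef(est.T)[0, 1]), 3))
```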
Abstract:
The identification of genes responsible for the rare cases of familial leukemia may afford insight into the mechanism underlying the more common sporadic occurrences. Here we test a single family with 11 relevant meioses transmitting autosomal dominant acute myelogenous leukemia (AML) and myelodysplasia for linkage to three potential candidate loci. In a different family with inherited AML, linkage to chromosome 21q22.1-22.2 was recently reported; we exclude linkage to 21q22.1-22.2, demonstrating that familial AML is a heterogeneous disease. After reviewing familial leukemia and observing anticipation in the form of a declining age of onset with each generation, we had proposed 9p21-22 and 16q22 as additional candidate loci. Whereas linkage to 9p21-22 can be excluded, the finding of a maximum two-point LOD score of 2.82 with the microsatellite marker D16S522 at a recombination fraction theta = 0 provides evidence supporting linkage to 16q22. Haplotype analysis reveals a 23.5-cM (17.9-Mb) commonly inherited region among all affected family members extending from D16S451 to D16S289. In order to extract maximum linkage information given missing individuals, incomplete informativeness of individual markers in this interval, and possible deviation from strict autosomal dominant inheritance, we performed nonparametric linkage analysis (NPL) and found a maximum NPL statistic corresponding to a P-value of .00098, close to the maximum conditional probability of linkage expected for a pedigree with this structure. Mutational analysis in this region specifically excludes expansion of the AT-rich minisatellite repeat at the FRA16B fragile site and of the CAG trinucleotide repeat in the E2F-4 transcription factor. The "repeat expansion detection" method, capable of detecting dynamic mutation associated with anticipation, more generally excludes large CAG repeat expansion as a cause of leukemia in this family.
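For reference, a two-point LOD score compares the likelihood at a recombination fraction theta with the likelihood at theta = 0.5. The sketch below assumes fully informative, phase-known meioses, which is simpler than the real pedigree analysed here (so it does not reproduce the reported 2.82).

```python
import numpy as np

def two_point_lod(nonrecombinants, recombinants, theta):
    """LOD score for phase-known meioses: log10 of the likelihood ratio of
    recombination fraction theta against theta = 0.5."""
    n, r = nonrecombinants, recombinants
    theta = np.clip(theta, 1e-12, 0.5)
    loglik = n * np.log10(1 - theta) + r * np.log10(theta)
    return loglik - (n + r) * np.log10(0.5)

# Illustrative: 11 informative meioses, no recombinants, theta from 0 to 0.5.
thetas = np.linspace(0.0, 0.5, 6)
print(two_point_lod(11, 0, thetas))
```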
Abstract:
The probit model is a popular device for explaining binary choice decisions in econometrics. It has been used to describe choices such as labor force participation, travel mode, home ownership, and type of education. These and many more examples can be found in papers by Amemiya (1981) and Maddala (1983). Given the contribution of economics towards explaining such choices, and given the nature of data that are collected, prior information on the relationship between a choice probability and several explanatory variables frequently exists. Bayesian inference is a convenient vehicle for including such prior information. Given the increasing popularity of Bayesian inference it is useful to ask whether inferences from a probit model are sensitive to a choice between Bayesian and sampling theory techniques. Of interest is the sensitivity of inference on coefficients, probabilities, and elasticities. We consider these issues in a model designed to explain choice between fixed and variable interest rate mortgages. Two Bayesian priors are employed: a uniform prior on the coefficients, designed to be noninformative for the coefficients, and an inequality restricted prior on the signs of the coefficients. We often know, a priori, whether increasing the value of a particular explanatory variable will have a positive or negative effect on a choice probability. This knowledge can be captured by using a prior probability density function (pdf) that is truncated to be positive or negative. Thus, three sets of results are compared: those from maximum likelihood (ML) estimation, those from Bayesian estimation with an unrestricted uniform prior on the coefficients, and those from Bayesian estimation with a uniform prior truncated to accommodate inequality restrictions on the coefficients.
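One simple way (not necessarily the computational approach of this work) to impose a sign-restricted flat prior in a Bayesian probit is a random-walk Metropolis sampler that rejects proposals violating the restrictions; everything below is an illustrative sketch assuming NumPy and SciPy, with simulated data.

```python
import numpy as np
from scipy.stats import norm

def probit_loglik(beta, y, X):
    """Probit log-likelihood."""
    p = np.clip(norm.cdf(X @ beta), 1e-12, 1 - 1e-12)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def sample_probit(y, X, signs, n_iter=5000, step=0.05, seed=0):
    """Random-walk Metropolis for probit coefficients under a flat prior
    truncated to the given signs (+1, -1, or 0 for unrestricted)."""
    rng = np.random.default_rng(seed)
    beta = np.zeros(X.shape[1])
    ll = probit_loglik(beta, y, X)
    draws = []
    for _ in range(n_iter):
        prop = beta + step * rng.standard_normal(len(beta))
        if np.all(np.sign(prop) * signs >= 0):       # enforce sign restrictions
            ll_prop = probit_loglik(prop, y, X)
            if np.log(rng.random()) < ll_prop - ll:
                beta, ll = prop, ll_prop
        draws.append(beta.copy())
    return np.array(draws)

# Example: simulated data; the slope is restricted to be non-negative.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(300), rng.standard_normal(300)])
y = (X @ np.array([-0.3, 0.8]) + rng.standard_normal(300) > 0).astype(int)
draws = sample_probit(y, X, signs=np.array([0, 1]))
print(draws[1000:].mean(axis=0))   # posterior means after burn-in
```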
Abstract:
A 4-wheel is a simple graph on 5 vertices with 8 edges, formed by taking a 4-cycle and joining a fifth vertex (the centre of the 4-wheel) to each of the other four vertices. A λ-fold 4-wheel system of order n is an edge-disjoint decomposition of the complete multigraph λK_n into 4-wheels. Here, with five isolated possible exceptions when λ = 2, we give necessary and sufficient conditions for a λ-fold 4-wheel system of order n to be transformed into a λ-fold 4-cycle system of order n by removing the centre vertex from each 4-wheel, together with its four adjacent edges (retaining the 4-cycle wheel rim), and reassembling these edges adjacent to wheel centres into 4-cycles.
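A tiny sketch of the first half of the transformation: each 4-wheel splits into its rim 4-cycle (kept) and its four spoke edges (removed); reassembling the removed spoke edges into further 4-cycles is the step governed by the necessary and sufficient conditions.

```python
def four_wheel(centre, rim):
    """Represent a 4-wheel by its rim 4-cycle edges and its spoke edges."""
    a, b, c, d = rim
    rim_edges = [frozenset(e) for e in ((a, b), (b, c), (c, d), (d, a))]
    spokes = [frozenset((centre, v)) for v in rim]
    return rim_edges, spokes

# Deleting the centre of a 4-wheel keeps the rim 4-cycle; the four spoke
# edges are what must later be reassembled into new 4-cycles.
rim, spokes = four_wheel(0, (1, 2, 3, 4))
print(rim)
print(spokes)
```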