188 results for Percolation probability
Abstract:
Interval-valued versions of the max-flow min-cut theorem and the Edmonds-Karp algorithm are developed; they provide robustness estimates for flows in networks in an imprecise or uncertain environment. These results are extended to networks with fuzzy capacities and flows.
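A minimal sketch of the interval-valued idea: since the maximum flow is monotone in the edge capacities, running a crisp max-flow (Edmonds-Karp below) at the lower and upper endpoint capacities brackets the interval-valued maximum flow. The graph and capacity intervals here are invented for illustration, not taken from the paper.

    from collections import deque

    def edmonds_karp(capacity, source, sink):
        """Max flow via shortest augmenting paths; capacity is a dict of dicts."""
        residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
        for u in list(residual):                     # make sure reverse edges exist
            for v in list(residual[u]):
                residual.setdefault(v, {}).setdefault(u, 0)
        flow = 0
        while True:
            parent = {source: None}                  # BFS for an augmenting path
            queue = deque([source])
            while queue and sink not in parent:
                u = queue.popleft()
                for v, cap in residual[u].items():
                    if cap > 0 and v not in parent:
                        parent[v] = u
                        queue.append(v)
            if sink not in parent:
                return flow
            path, v = [], sink                       # walk back to recover the path
            while parent[v] is not None:
                path.append((parent[v], v))
                v = parent[v]
            bottleneck = min(residual[u][v] for u, v in path)
            for u, v in path:                        # augment along the path
                residual[u][v] -= bottleneck
                residual[v][u] += bottleneck
            flow += bottleneck

    # Interval capacities (lo, hi) per edge; max flow is monotone in the
    # capacities, so the endpoint runs bracket the interval-valued flow.
    interval_cap = {
        's': {'a': (2, 4), 'b': (1, 3)},
        'a': {'t': (2, 5)},
        'b': {'t': (2, 2)},
        't': {},
    }
    lo = {u: {v: c[0] for v, c in nb.items()} for u, nb in interval_cap.items()}
    hi = {u: {v: c[1] for v, c in nb.items()} for u, nb in interval_cap.items()}
    print(edmonds_karp(lo, 's', 't'), edmonds_karp(hi, 's', 't'))  # -> 3 6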
Abstract:
The anisotropic norm of a linear discrete time-invariant system measures the sensitivity of the system output to stationary Gaussian input disturbances of bounded mean anisotropy. Mean anisotropy characterizes the degree of predictability (or colouredness) and spatial non-roundness of the noise. The anisotropic norm falls between the H-2 and H-infinity norms and accommodates their loss of performance when the probability structure of the input disturbances is not exactly known. This paper develops a method for numerical computation of the anisotropic norm which involves coupled Riccati and Lyapunov equations and an associated equation of special type.
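The interpolation property mentioned in the abstract is commonly written as follows (notation is ours, with F an m-input system and a >= 0 the mean-anisotropy bound; a hedged restatement, not a quotation from the paper):

    \frac{\|F\|_2}{\sqrt{m}} \;\le\; \lvert\!\lvert\!\lvert F \rvert\!\rvert\!\rvert_a \;\le\; \|F\|_\infty,

with the a-anisotropic norm nondecreasing in a, recovering the scaled H-2 norm at a = 0 and approaching the H-infinity norm as a tends to infinity.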
Abstract:
We shall examine a model, first studied by Brockwell et al. [Adv. Appl. Probab. 14 (1982) 709], which can be used to describe the long-term behaviour of populations that are subject to catastrophic mortality or emigration events. Populations can suffer dramatic declines when disease, such as an introduced virus, affects the population, or when food shortages occur due to overgrazing or fluctuations in rainfall. However, perhaps surprisingly, such populations can survive for long periods and, although they may eventually become extinct, they can exhibit an apparently stationary regime. It is useful to be able to model this behaviour. This is particularly true of the ecological examples that motivated the present study, since, in order to manage these populations properly, it is necessary to be able to predict persistence times and to estimate the conditional probability distribution of population size. We shall see that although our model predicts eventual extinction, the time to extinction can be long, and the stationarity exhibited by these populations over any reasonable time scale can be explained using a quasi-stationary distribution.
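A quasi-stationary distribution of this kind can be estimated by simulation: run a birth-death process with catastrophes and histogram the population size conditional on survival. The sketch below is illustrative only; the geometric catastrophe size and all parameter values are our assumptions, not those of the paper.

    import random
    from collections import Counter

    # Illustrative birth-death-catastrophe process: per-capita birth rate b,
    # death rate d, catastrophe rate c; each catastrophe removes a geometric
    # number of individuals. All values are assumptions for the sketch.
    b, d, c = 1.0, 0.5, 0.15
    T, N0, RUNS = 30.0, 20, 1000

    def simulate(n, t_end):
        """Gillespie simulation; returns the population size at t_end (0 = extinct)."""
        t = 0.0
        while n > 0:
            t += random.expovariate((b + d + c) * n)
            if t >= t_end:
                break
            u = random.uniform(0.0, b + d + c)
            if u < b:
                n += 1                       # birth
            elif u < b + d:
                n -= 1                       # single death
            else:                            # catastrophe: geometric loss, mean ~3.3
                loss = 1
                while random.random() < 0.7:
                    loss += 1
                n = max(n - loss, 0)
        return n

    survivors = Counter(simulate(N0, T) for _ in range(RUNS))
    alive = {n: k for n, k in survivors.items() if n > 0}
    total = sum(alive.values())
    # conditional distribution of population size given survival to time T
    qsd = {n: k / total for n, k in sorted(alive.items())}
    print(f"P(survive to T) ~ {total / RUNS:.2f}", qsd)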
Abstract:
A mixture model incorporating long-term survivors has been adopted in the field of biostatistics, where some individuals may never experience the failure event under study. The surviving fractions may be considered as cured. In most applications, the survival times are assumed to be independent. However, when the survival data are obtained from a multi-centre clinical trial, it is conceivable that the environmental conditions and facilities shared within a clinic affect the proportion cured as well as the failure risk for the uncured individuals. This necessitates a long-term survivor mixture model with random effects. In this paper, the long-term survivor mixture model is extended for the analysis of multivariate failure time data using the generalized linear mixed model (GLMM) approach. The proposed model is applied to analyse a data set from a multi-centre clinical trial of carcinoma as an illustration. Some simulation experiments are performed to assess the applicability of the model based on the average biases of the estimates obtained.
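The basic long-term survivor mixture has a standard form, which (in our notation, as a hedged illustration rather than the paper's exact specification) reads:

    S(t) = \pi + (1 - \pi)\, S_u(t),

where \pi is the probability of being a long-term survivor (cured) and S_u is the survival function of the uncured group. A GLMM-style extension might place random clinic effects in both components, e.g. \operatorname{logit}(\pi_{ij}) = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta}_1 + u_i for patient j in clinic i, and a proportional-hazards term \exp(\mathbf{x}_{ij}^{\top}\boldsymbol{\beta}_2 + v_i) for the uncured, with (u_i, v_i) normally distributed random effects.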
Abstract:
Proportionally balanced designs were introduced by Gray and Matters in response to a need for the allocation of markers of the Queensland Core Skills Test to have a certain property. Put simply, markers were allocated to pairs of units in proportions that reflected the relative numbers of markers allocated in total to each unit. In this paper, the first author extends the theoretical results relating to such designs and provides further instances, and two general constructions, in the case that the design comprises blocks of precisely two sizes.
Abstract:
Adenomas are the precursors of most colorectal cancers. Hyperplastic polyps have been linked to the subset of colorectal cancers showing DNA microsatellite instability, but little is known of their underlying genetic etiology. Using a strategy that isolates differentially methylated sequences from hyperplastic polyps and normal mucosa, we identified a 370-bp sequence containing the 5' untranslated region and the first exon of a gene that we have called HPP1. Rapid amplification of cDNA ends was used to isolate HPP1 from normal mucosa. Using reverse transcription-PCR, HPP1 was expressed in 28 of 30 (93%) normal colonic samples but in only seven of 30 (23%) colorectal cancers (P < 0.001). The 5' region of HPP1 included a CpG island containing 49 CpG sites, of which 96% were found to be methylated by bisulfite sequencing of DNA from colonic tumor samples. By COBRA analysis, methylation was detected in six of nine (66%) adenomas, 17 of 27 (63%) hyperplastic polyps, and 46 of 55 (84%) colorectal cancers. There was an inverse relationship between methylation level and mRNA expression in cancers (r = -0.67; P < 0.001), and 5-aza-2'-deoxycytidine treatment restored HPP1 expression in two colorectal cancer cell lines. In situ hybridization of HPP1 indicated that expression occurs in epithelial and stromal elements in normal mucosa but is silenced in both cell types in early colonic neoplasia. HPP1 is predicted to encode a transmembrane protein containing follistatin and epidermal growth factor-like domains. Silencing of HPP1 by methylation may increase the probability of neoplastic transformation.
Abstract:
Consider a tandem system of machines separated by infinitely large buffers. The machines process a continuous flow of products, possibly at different speeds. The life and repair times of the machines are assumed to be exponential. We claim that the overflow probability of each buffer has an exponential decay, and provide an algorithm to determine the exact decay rates in terms of the speeds and the failure and repair rates of the machines. These decay rates provide useful qualitative insight into the behavior of the flow line. In the derivation of the algorithm we use the theory of Large Deviations.
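The exponential-decay claim can be stated precisely as follows (our notation, a hedged restatement): if V_i denotes the stationary content of buffer i, then

    \lim_{x \to \infty} \frac{1}{x} \log \mathbb{P}(V_i > x) = -\theta_i,

where the decay rate \theta_i is determined, via the large-deviations rate function of the net input to the buffer, by the machine speeds and the exponential failure and repair rates.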
Abstract:
We use a stochastic patch occupancy model of invertebrates in the Mound Springs ecosystem of South Australia to assess the ability of incidence function models to detect environmental impacts on metapopulations. We assume that the probability of colonisation decreases with increasing isolation and that the probability of extinction is constant across spring vents. We run the models to quasi-equilibrium and then impose an impact by increasing the local extinction probability. We sample the output at various times pre- and post-impact, and examine the probability of detecting a significant change in population parameters. The incidence function model approach turns out to have little power to detect environmental impacts on metapopulations with small numbers of patches.
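The simulation design described above can be sketched as follows; the patch coordinates, dispersal scale, and all rates are invented for illustration and are not the study's values.

    import math, random

    # Minimal stochastic patch occupancy sketch: constant extinction risk,
    # colonisation probability decreasing with isolation from occupied patches.
    random.seed(1)
    N = 12                                            # spring vents (patches)
    xy = [(random.random(), random.random()) for _ in range(N)]
    alpha, c0, e0 = 3.0, 0.8, 0.1                     # dispersal, colonisation, extinction

    def step(occ, e):
        """One synchronous update of the occupancy vector."""
        new = []
        for i in range(N):
            if occ[i]:
                new.append(random.random() > e)       # patch survives extinction?
            else:
                s = sum(math.exp(-alpha * math.dist(xy[i], xy[j]))
                        for j in range(N) if occ[j])  # connectivity to occupied patches
                new.append(random.random() < 1 - math.exp(-c0 * s))
        return new

    occ = [True] * N
    for _ in range(500):                              # run to quasi-equilibrium
        occ = step(occ, e0)
    pre = sum(occ)
    for _ in range(500):                              # impose impact: raise extinction
        occ = step(occ, 2.5 * e0)
    print("occupied pre-impact:", pre, "post-impact:", sum(occ))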
Abstract:
A recent study by Brook et al. empirically tested the performance of population viability analysis (PVA) using data from 21 populations across a wide range of species. The study concluded that PVAs are good at predicting the future dynamics of populations. We suggest that this conclusion is a result of a bias in the studies that Brook et al. included in their analyses. We present arguments that PVAs can only be accurate at predicting extinction probabilities if data are extensive and reliable, and if the distribution of vital rates between individuals and years can be assumed stationary in the future, or if any changes can be accurately predicted. In particular, we note that although catastrophes are likely to have precipitated many extinctions, estimates of the probability of catastrophes are unreliable.
Abstract:
Lenin et al. (2000) recently introduced the idea of similarity in the context of birth-death processes. This paper examines the extent to which their results can be extended to arbitrary Markov chains. It is proved that, under a variety of conditions, similar chains are strongly similar in a sense which is described, and it is shown that minimal chains are strongly similar if and only if the corresponding transition-rate matrices are strongly similar. A general framework is given for constructing families of strongly similar chains; it permits the construction of all such chains in the irreducible case.
Abstract:
In this note we show by counter-example that the direct product of two weakly uniquely completable partial latin squares is not necessarily a uniquely completable partial latin square. This counter-example refutes a conjecture of Gower (see [3]) on the direct product of two uniquely completable partial latin squares.
Abstract:
The number of 1-factors (near-1-factors) that μ 1-factorizations (near-1-factorizations) of the complete graph K_v, v even (v odd), can have in common is studied. The problem is completely settled for μ = 2 and μ = 3.
Abstract:
This paper develops a general framework for valuing a wide range of derivative securities. Rather than focusing on the stochastic process of the underlying security and developing an instantaneously riskless hedge portfolio, we focus on the terminal distribution of the underlying security. This enables the derivative security to be valued as the weighted sum of a number of component pieces. The component pieces are simply the different payoffs that the security generates in different states of the world, and they are weighted by the probability of the particular state of the world occurring. A full set of derivations is provided. To illustrate its use, the valuation framework is applied to plain-vanilla call and put options, as well as a range of derivatives including caps, floors, collars, supershares, and digital options.
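The state-weighted valuation idea can be illustrated numerically. The sketch below values a plain-vanilla call as the discounted, probability-weighted sum of its payoffs over discretised terminal states, assuming (our choice, not necessarily the paper's) a risk-neutral lognormal terminal distribution, and checks the result against the Black-Scholes closed form.

    import math

    # Illustrative parameters (assumptions for the sketch)
    S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

    def terminal_pdf(s):
        """Lognormal density of the terminal price under the assumed model."""
        mu = math.log(S0) + (r - 0.5 * sigma ** 2) * T
        sd = sigma * math.sqrt(T)
        return math.exp(-(math.log(s) - mu) ** 2 / (2 * sd ** 2)) / (s * sd * math.sqrt(2 * math.pi))

    ds = 0.05                                 # discretise the states of the world
    states = [ds * i for i in range(1, int(400 / ds))]
    call = math.exp(-r * T) * sum(max(s - K, 0.0) * terminal_pdf(s) * ds for s in states)

    # Black-Scholes closed form, for comparison with the state-weighted sum
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    bs = S0 * Phi(d1) - K * math.exp(-r * T) * Phi(d2)
    print(f"state-weighted value: {call:.4f}   Black-Scholes: {bs:.4f}")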