188 results for Percolation probability


Relevance: 10.00%

Abstract:

For Markov processes on the positive integers with the origin as an absorbing state, Ferrari, Kesten, Martinez and Picco studied the existence of quasi-stationary and limiting conditional distributions by characterizing quasi-stationary distributions as fixed points of a transformation Phi on the space of probability distributions on {1, 2, ...}. In the case of a birth-death process, the components of Phi(nu) can be written down explicitly for any given distribution nu. Using this explicit representation, we will show that Phi preserves likelihood ratio ordering between distributions. A conjecture of Kryscio and Lefevre concerning the quasi-stationary distribution of the SIS logistic epidemic follows as a corollary.
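A rough way to see what a quasi-stationary distribution of such a chain looks like numerically is to iterate "take one step of the chain, then condition on not being absorbed" until the distribution stops changing. The sketch below does this for a truncated birth-death chain; the logistic-style birth and death rates and the truncation level are illustrative assumptions, and the iteration is a generic stand-in rather than the authors' exact transformation Phi.

```python
import numpy as np

# Illustrative sketch: approximate a quasi-stationary distribution of a
# birth-death chain on {0, 1, 2, ...} with 0 absorbing, by repeatedly taking
# one step of the embedded jump chain and conditioning on non-absorption.
# The rates and truncation below are assumptions for demonstration only.
N = 200                          # truncate the transient state space at N
i = np.arange(1, N + 1)          # transient states 1..N
birth = 1.5 * i * (1 - i / N)    # hypothetical birth rates lambda_i (logistic SIS-like)
death = 1.0 * i                  # hypothetical death rates mu_i

# sub-stochastic jump-chain matrix on {1..N}; index k corresponds to state k+1
P = np.zeros((N, N))
total = birth + death
for k in range(N):
    if k + 1 < N:
        P[k, k + 1] = birth[k] / total[k]   # birth: move up one state
    if k - 1 >= 0:
        P[k, k - 1] = death[k] / total[k]   # death: move down one state
# from state 1 the death move is absorption at 0, so that row's mass is simply lost

nu = np.full(N, 1.0 / N)         # initial distribution on {1..N}
for _ in range(5000):
    nu = nu @ P
    nu /= nu.sum()               # condition on non-absorption and renormalize

print("approximate quasi-stationary mean state:", (nu * i).sum())
```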

Relevance: 10.00%

Abstract:

A decision theory framework can be a powerful technique to derive optimal management decisions for endangered species. We built a spatially realistic stochastic metapopulation model for the Mount Lofty Ranges Southern Emu-wren (Stipiturus malachurus intermedius), a critically endangered Australian bird. Using discrete-time Markov chains to describe the dynamics of a metapopulation and stochastic dynamic programming (SDP) to find optimal solutions, we evaluated the following different management decisions: enlarging existing patches, linking patches via corridors, and creating a new patch. This is the first application of SDP to optimal landscape reconstruction and one of the few times that landscape reconstruction dynamics have been integrated with population dynamics. SDP is a powerful tool that has advantages over standard Monte Carlo simulation methods because it can give the exact optimal strategy for every landscape configuration (combination of patch areas and presence of corridors) and pattern of metapopulation occupancy, as well as a trajectory of strategies. It is useful when a sequence of management actions can be performed over a given time horizon, as is the case for many endangered species recovery programs, where only fixed amounts of resources are available in each time step. However, it is generally limited by computational constraints to rather small networks of patches. The model shows that optimal metapopulation management decisions depend greatly on the current state of the metapopulation, and there is no strategy that is universally the best. The extinction probability over 30 yr for the optimal state-dependent management actions is 50-80% better than no management, whereas the best fixed state-independent sets of strategies are only 30% better than no management. This highlights the advantages of using a decision theory tool to investigate conservation strategies for metapopulations. It is clear from these results that the sequence of management actions is critical, and this can only be effectively derived from stochastic dynamic programming. The model illustrates the underlying difficulty in determining simple rules of thumb for the sequence of management actions for a metapopulation. This use of a decision theory framework extends the capacity of population viability analysis (PVA) to manage threatened species.
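The core of the SDP approach described above is backward induction: starting from a terminal reward, work backwards through time, choosing at each step the action that maximizes the expected future value in every state. The toy sketch below shows that structure; the states, actions and transition probabilities are invented placeholders, not the published metapopulation model.

```python
import numpy as np

# Minimal finite-horizon stochastic dynamic programming sketch.  The states,
# actions and transition matrices are toy assumptions; the point is the
# backward-induction loop that yields an optimal action for every state and year.
states = ["extinct", "1 patch occupied", "2 patches occupied"]
actions = ["do nothing", "enlarge patch", "build corridor"]

# P[a][s, s'] = probability of moving from state s to s' in one year under action a
P = {
    "do nothing":     np.array([[1.0, 0.0, 0.0],
                                [0.3, 0.5, 0.2],
                                [0.1, 0.3, 0.6]]),
    "enlarge patch":  np.array([[1.0, 0.0, 0.0],
                                [0.2, 0.5, 0.3],
                                [0.05, 0.25, 0.7]]),
    "build corridor": np.array([[1.0, 0.0, 0.0],
                                [0.25, 0.45, 0.3],
                                [0.05, 0.2, 0.75]]),
}

T = 30                               # planning horizon in years
V = np.array([0.0, 1.0, 1.0])        # terminal reward: 1 if not extinct at year T
policy = []
for t in reversed(range(T)):
    Q = np.stack([P[a] @ V for a in actions])   # expected value of each action per state
    best = Q.argmax(axis=0)
    policy.append([actions[b] for b in best])   # optimal action for each state at time t
    V = Q.max(axis=0)

print("probability of persistence from each initial state:", np.round(V, 3))
print("optimal first-year action per state:", policy[-1])
```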

Relevance: 10.00%

Abstract:

This paper proposes a template for modelling complex datasets that integrates traditional statistical modelling approaches with more recent advances in statistics and modelling through an exploratory framework. Our approach builds on the well-known and long-standing traditional idea of 'good practice in statistics' by establishing a comprehensive framework for modelling that focuses on exploration, prediction, interpretation and reliability assessment, a relatively new idea that allows individual assessment of predictions. The integrated framework we present comprises two stages. The first involves the use of exploratory methods to help visually understand the data and identify a parsimonious set of explanatory variables. The second encompasses a two-step modelling process, where the use of non-parametric methods such as decision trees and generalized additive models is promoted to identify important variables and their modelling relationship with the response before a final predictive model is considered. We focus on fitting the predictive model using parametric, non-parametric and Bayesian approaches. This paper is motivated by a medical problem where interest focuses on developing a risk stratification system for morbidity of 1,710 cardiac patients given a suite of demographic, clinical and preoperative variables. Although the methods we use are applied specifically to this case study, these methods can be applied across any field, irrespective of the type of response.
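The two-step idea (screen variables with a flexible non-parametric learner, then fit a simpler parametric risk model on the retained variables) can be sketched in a few lines. The example below uses synthetic data and scikit-learn; the cardiac dataset, variable names and model choices are not from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Illustrative sketch of the two-step modelling process on synthetic data:
# step 1 screens variables with a decision tree, step 2 fits a parsimonious
# parametric risk model on the screened variables.
X, y = make_classification(n_samples=1710, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 1: exploratory / variable-screening model
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
keep = np.argsort(tree.feature_importances_)[::-1][:5]   # retain the top 5 variables

# Step 2: final parametric predictive model on the retained variables
logit = LogisticRegression(max_iter=1000).fit(X_tr[:, keep], y_tr)
risk = logit.predict_proba(X_te[:, keep])[:, 1]
print("AUC of the final risk model:", round(roc_auc_score(y_te, risk), 3))
```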

Relevance: 10.00%

Abstract:

The extent to which density-dependent processes regulate natural populations is the subject of an ongoing debate. We contribute evidence to this debate showing that density-dependent processes influence the population dynamics of the ectoparasite Aponomma hydrosauri (Acari: Ixodidae), a tick species that infests reptiles in Australia. The first piece of evidence comes from an unusually long-term dataset on the distribution of ticks among individual hosts. If density-dependent processes are influencing either host mortality or vital rates of the parasite population, and those distributions can be approximated with negative binomial distributions, then general host-parasite models predict that the aggregation coefficient of the parasite distribution will increase with the average intensity of infections. We fit negative binomial distributions to the frequency distributions of ticks on hosts, and find that the estimated aggregation coefficient k increases with increasing average tick density. This pattern indirectly implies that one or more vital rates of the tick population must be changing with increasing tick density, because mortality rates of the tick's main host, the sleepy lizard, Tiliqua rugosa, are unaffected by changes in tick burdens. Our second piece of evidence is a re-analysis of experimental data on the attachment success of individual ticks to lizard hosts using generalized linear modelling. The probability of successful engorgement decreases with increasing numbers of ticks attached to a host. This is direct evidence of a density-dependent process that could lead to an increase in the aggregation coefficient of tick distributions described earlier. The population-scale increase in the aggregation coefficient is indirect evidence of a density-dependent process or processes sufficiently strong to produce a population-wide pattern, and thus also likely to influence population regulation. The direct observation of a density-dependent process is evidence of at least part of the responsible mechanism.
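Both analyses in this abstract have simple computational counterparts: fitting a negative binomial to per-host tick counts to estimate the aggregation coefficient k, and fitting a logistic GLM of engorgement success against the number of ticks attached. The sketch below runs both on simulated stand-in data (not the field dataset); the rates and effect sizes are assumptions.

```python
import numpy as np
from scipy.stats import nbinom
from scipy.optimize import minimize
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Part 1: negative binomial fit to simulated tick counts per host,
# parameterized by mean m and aggregation k via scipy's nbinom(n=k, p=k/(k+m)).
true_k, true_m = 0.8, 6.0
counts = nbinom.rvs(true_k, true_k / (true_k + true_m), size=300, random_state=1)

def neg_loglik(params):
    k, m = np.exp(params)                     # keep both parameters positive
    return -nbinom.logpmf(counts, k, k / (k + m)).sum()

fit = minimize(neg_loglik, x0=np.log([1.0, counts.mean() + 0.1]), method="Nelder-Mead")
k_hat, m_hat = np.exp(fit.x)
print(f"estimated aggregation k = {k_hat:.2f}, mean intensity = {m_hat:.2f}")

# Part 2: logistic GLM of engorgement success against ticks attached
# (a negative slope indicates density-dependent attachment success).
n_attached = rng.integers(1, 40, size=400)
p_success = 1 / (1 + np.exp(-(1.0 - 0.08 * n_attached)))   # assumed declining success
success = rng.binomial(1, p_success)
glm = sm.GLM(success, sm.add_constant(n_attached), family=sm.families.Binomial()).fit()
print(glm.params)
```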

Relevance: 10.00%

Abstract:

Many large-scale stochastic systems, such as telecommunications networks, can be modelled using a continuous-time Markov chain. However, it is frequently the case that a satisfactory analysis of their time-dependent, or even equilibrium, behaviour is impossible. In this paper, we propose a new method of analyzing Markovian models, whereby the existing transition structure is replaced by a more amenable one. Using rates of transition given by the equilibrium expected rates of the corresponding transitions of the original chain, we are able to approximate its behaviour. We present two formulations of the idea of expected rates. The first provides a method for analysing time-dependent behaviour, while the second provides a highly accurate means of analysing equilibrium behaviour. We shall illustrate our approach with reference to a variety of models, giving particular attention to queueing and loss networks. (C) 2003 Elsevier Ltd. All rights reserved.
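To fix ideas, the equilibrium distribution of a continuous-time Markov chain with generator Q solves pi Q = 0 with the entries of pi summing to one, and an "equilibrium expected rate" of a class of transitions is then the pi-weighted sum of the corresponding rates. The sketch below does this for a toy queueing generator; it only illustrates the calculation, not the paper's specific replacement construction.

```python
import numpy as np

# Toy generator of an M/M/1/3 queue: arrivals at rate 1, service at rate 2.
Q = np.array([[-1.0,  1.0,  0.0,  0.0],
              [ 2.0, -3.0,  1.0,  0.0],
              [ 0.0,  2.0, -3.0,  1.0],
              [ 0.0,  0.0,  2.0, -2.0]])

# Stationary distribution: solve pi Q = 0 subject to sum(pi) = 1.
A = np.vstack([Q.T, np.ones(Q.shape[0])])
b = np.zeros(Q.shape[0] + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("equilibrium distribution:", np.round(pi, 4))

# Equilibrium expected rate of the "arrival" transitions (state i -> i+1).
arrival_rate = sum(pi[i] * Q[i, i + 1] for i in range(Q.shape[0] - 1))
print("equilibrium expected arrival rate:", round(arrival_rate, 4))
```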

Relevance: 10.00%

Abstract:

We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright (C) 2003 John Wiley & Sons, Ltd.

Relevance: 10.00%

Abstract:

Sensitivity of output of a linear operator to its input can be quantified in various ways. In Control Theory, the input is usually interpreted as disturbance and the output is to be minimized in some sense. In stochastic worst-case design settings, the disturbance is considered random with imprecisely known probability distribution. The prior set of probability measures can be chosen so as to quantify how far the disturbance deviates from the white-noise hypothesis of Linear Quadratic Gaussian control. Such deviation can be measured by the minimal Kullback-Leibler informational divergence from the Gaussian distributions with zero mean and scalar covariance matrices. The resulting anisotropy functional is defined for finite power random vectors. Originally, anisotropy was introduced for directionally generic random vectors as the relative entropy of the normalized vector with respect to the uniform distribution on the unit sphere. The associated a-anisotropic norm of a matrix is then its maximum root mean square or average energy gain with respect to finite power or directionally generic inputs whose anisotropy is bounded above by a≥0. We give a systematic comparison of the anisotropy functionals and the associated norms. These are considered for unboundedly growing fragments of homogeneous Gaussian random fields on multidimensional integer lattice to yield mean anisotropy. Correspondingly, the anisotropic norms of finite matrices are extended to bounded linear translation invariant operators over such fields.
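For a zero-mean Gaussian vector the quantity described here has a concrete form: the minimal Kullback-Leibler divergence from the family N(0, lambda*I) is attained at lambda = tr(Sigma)/m and equals -0.5 * ln det(m*Sigma/tr(Sigma)). The sketch below checks this numerically for an arbitrary covariance; the example matrix is an assumption, chosen only for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Anisotropy of a zero-mean Gaussian vector as the minimal KL divergence from
# the Gaussians with scalar covariance matrices; the covariance is arbitrary.
rng = np.random.default_rng(0)
m = 4
B = rng.standard_normal((m, m))
Sigma = B @ B.T + 0.5 * np.eye(m)        # an arbitrary positive-definite covariance

def kl_to_scalar_gaussian(lam):
    # D( N(0, Sigma) || N(0, lam*I) ) for an m-dimensional zero-mean Gaussian
    return 0.5 * (np.trace(Sigma) / lam - m
                  + m * np.log(lam) - np.linalg.slogdet(Sigma)[1])

res = minimize_scalar(kl_to_scalar_gaussian, bounds=(1e-6, 1e6), method="bounded")

# Closed form: minimum at lam = tr(Sigma)/m, value -0.5 * ln det(m*Sigma/tr(Sigma)).
closed_form = -0.5 * np.linalg.slogdet(m * Sigma / np.trace(Sigma))[1]
print("numerical minimum :", round(res.fun, 6))
print("closed-form value :", round(closed_form, 6))
```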

Relevance: 10.00%

Abstract:

We present an abstract model of the leader election protocol used in the IEEE 1394 High Performance Serial Bus standard. The model is expressed in the probabilistic Guarded Command Language. By formal reasoning based on this description, we establish the probability of the root contention part of the protocol successfully terminating in terms of the number of attempts to do so. Some simple calculations then allow us to establish an upper bound on the time taken for those attempts.
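If each retry of the root-contention coin-flipping round resolves the contention independently with some probability p (the standard symmetric analysis gives p = 1/2, taken here as an assumption rather than a result quoted from the paper), then the probability of termination within n attempts is 1 - (1-p)^n and the expected number of attempts is 1/p, as the small calculation below shows.

```python
# Back-of-the-envelope view of root contention: assume each attempt resolves the
# contention independently with probability p (p = 1/2 for the symmetric coin flip).
p = 0.5
for n in (1, 2, 5, 10):
    print(f"P(resolved within {n:2d} attempts) = {1 - (1 - p) ** n:.6f}")
print("expected number of attempts =", 1 / p)
```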

Relevance: 10.00%

Abstract:

A more efficient classifying cyclone (CC) for fine particle classification has been developed in recent years at the JKMRC. The novel CC, known as the JKCC, has modified profiles of the cyclone body, vortex finder, and spigot when compared to conventional hydrocyclones. The novel design increases the centrifugal force inside the cyclone and mitigates the short-circuiting flow that exists in all current cyclones. It also decreases the probability of particle contamination in the region near the cyclone spigot. Consequently the cyclone efficiency is improved while the unit maintains a simple structure. An international patent has been granted for this novel cyclone design. In the first development stage (a feasibility study), a 100 mm JKCC was tested and compared with two 100 mm commercial units. Very encouraging results were achieved, indicating good potential for the novel design. In the second development stage (scale-up), the JKCC was scaled up to 200 mm in diameter, and its geometry was optimized through numerous tests. The performance of the JKCC was compared with a 150 mm commercial unit and exhibited sharper separation, finer separation size, and lower flow ratios. The JKCC is now being scaled up into a full-size (480 mm) hydrocyclone in the third development stage (an industrial study). The 480 mm diameter unit will be tested in an Australian coal preparation plant, and directly compared with a commercial CC operating under the same conditions. Classifying cyclone performance for fine coal could be further improved if the unit is installed in an inclined position. The study using the 200 mm JKCC has revealed that sharpness of separation improved and the flow ratio to underflow was decreased by 43% as the cyclone inclination was varied from the vertical position (0 degrees) to the horizontal position (90 degrees). The separation size was not affected, although the feed rate was slightly decreased. To ensure self-emptying upon shutdown, it is recommended that the JKCC be installed at an inclination of 75-80 degrees. At this angle the cyclone performance is very similar to that at a horizontal position. Similar findings have been derived from the testing of a conventional hydrocyclone. This may be of benefit to operations that require improved performance from their classifying cyclones in terms of sharpness of separation and flow ratio, while tolerating slightly reduced feed rate.

Relevance: 10.00%

Abstract:

A new modeling approach, multiple mapping conditioning (MMC), is introduced to treat mixing and reaction in turbulent flows. The model combines the advantages of the probability density function and the conditional moment closure methods and is based on a certain generalization of the mapping closure concept. An equivalent stochastic formulation of the MMC model is given. The validity of the closure hypothesis of the model is demonstrated by a comparison with direct numerical simulation results for the three-stream mixing problem. (C) 2003 American Institute of Physics.

Relevance: 10.00%

Abstract:

A new lifetime distribution capable of modeling a bathtub-shaped hazard-rate function is proposed. The proposed model is derived as a limiting case of the Beta Integrated Model and has both the Weibull distribution and Type I extreme value distribution as special cases. The model can be considered as another useful 3-parameter generalization of the Weibull distribution. An advantage of the model is that the model parameters can be estimated easily based on a Weibull probability paper (WPP) plot that serves as a tool for model identification. Model characterization based on the WPP plot is studied. A numerical example is provided and comparison with another Weibull extension, the exponentiated Weibull, is also discussed. The proposed model compares well with other competing models to fit data that exhibits a bathtub-shaped hazard-rate function.
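A Weibull probability paper (WPP) plot of the kind used for model identification here plots y = ln(-ln(1 - F(t))) against x = ln(t): an exact Weibull sample falls on a straight line whose slope estimates the shape parameter, and curvature signals a departure such as a bathtub-shaped hazard. The sketch below builds such a plot from simulated lifetimes; the data and plotting-position formula are illustrative choices, not taken from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

# Weibull probability paper (WPP) plot from simulated lifetimes.
rng = np.random.default_rng(0)
t = np.sort(rng.weibull(1.5, size=200) * 100.0)          # simulated lifetimes
F = (np.arange(1, len(t) + 1) - 0.3) / (len(t) + 0.4)    # median-rank plotting positions

x = np.log(t)
y = np.log(-np.log(1.0 - F))
slope, intercept = np.polyfit(x, y, 1)                    # slope estimates the shape parameter

plt.scatter(x, y, s=10)
plt.plot(x, slope * x + intercept, color="red")
plt.xlabel("ln(t)")
plt.ylabel("ln(-ln(1 - F(t)))")
plt.title(f"WPP plot; fitted shape = {slope:.2f}")
plt.show()
```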

Relevance: 10.00%

Abstract:

This article presents Monte Carlo techniques for estimating network reliability. For highly reliable networks, techniques based on graph evolution models provide very good performance. However, they are known to have significant simulation cost. An existing hybrid scheme (based on partitioning the time space) is available to speed up the simulations; however, there are difficulties with optimizing the important parameter associated with this scheme. To overcome these difficulties, a new hybrid scheme (based on partitioning the edge set) is proposed in this article. The proposed scheme shows orders of magnitude improvement of performance over the existing techniques in certain classes of network. It also provides reliability bounds with little overhead.
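For context, the baseline that graph-evolution and hybrid schemes improve on is crude Monte Carlo: sample which edges are up, check whether the terminals are connected, and average. The sketch below implements that naive estimator on a made-up five-edge network; it is not the proposed edge-set-partitioning scheme, and its weakness (failures are rarely sampled in highly reliable networks) is exactly why the more sophisticated schemes exist.

```python
import random

# Crude Monte Carlo estimator of two-terminal network reliability on a toy graph.
edges = [(0, 1, 0.9), (1, 2, 0.9), (0, 2, 0.8), (2, 3, 0.95), (1, 3, 0.85)]
source, terminal, n_nodes = 0, 3, 4

def connected(up_edges):
    # union-find over the operational edges
    parent = list(range(n_nodes))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in up_edges:
        parent[find(u)] = find(v)
    return find(source) == find(terminal)

random.seed(1)
trials, hits = 100_000, 0
for _ in range(trials):
    up = [(u, v) for u, v, p in edges if random.random() < p]
    hits += connected(up)
print("estimated reliability:", hits / trials)
```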

Relevance: 10.00%

Abstract:

For zygosity diagnosis in the absence of genotypic data, or in the recruitment phase of a twin study where only single twins from same-sex pairs are being screened, or to provide a test for sample duplication leading to the false identification of a dizygotic pair as monozygotic, the appropriate analysis of respondents' answers to questions about zygosity is critical. Using data from a young adult Australian twin cohort (N = 2094 complete pairs and 519 singleton twins from same-sex pairs with complete responses to all zygosity items), we show that application of latent class analysis (LCA), fitting a 2-class model, yields results that show good concordance with traditional methods of zygosity diagnosis, but with certain important advantages. These include the ability, in many cases, to assign zygosity with specified probability on the basis of responses of a single informant (advantageous when one zygosity type is being oversampled); and the ability to quantify the probability of misassignment of zygosity, allowing prioritization of cases for genotyping as well as identification of cases of probable laboratory error. Out of 242 twins (from 121 like-sex pairs) where genotypic data were available for zygosity confirmation, only a single case was identified of incorrect zygosity assignment by the latent class algorithm. Zygosity assignment for that single case was identified by the LCA as uncertain (probability of being a monozygotic twin only 76%), and the co-twin's responses clearly identified the pair as dizygotic (probability of being dizygotic 100%). In the absence of genotypic data, or as a safeguard against sample duplication, application of LCA for zygosity assignment or confirmation is strongly recommended.
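The workhorse behind this approach is a two-class latent class model for binary items, usually fitted by EM: posterior class-membership probabilities are computed from current parameter estimates (E-step), then class prevalences and item-response probabilities are re-estimated from those posteriors (M-step). The sketch below fits such a model to simulated stand-in responses; the items, sample and parameter values are not the Australian cohort data.

```python
import numpy as np

# Minimal two-class latent class analysis by EM for binary items, on simulated data.
rng = np.random.default_rng(0)
n, J = 2000, 5
true_class = rng.binomial(1, 0.45, size=n)                 # hidden class per respondent
true_rho = np.array([[0.2, 0.3, 0.25, 0.15, 0.3],          # item endorsement prob. by class
                     [0.9, 0.85, 0.8, 0.9, 0.75]])
X = rng.binomial(1, true_rho[true_class])                  # n x J binary responses

pi = np.array([0.5, 0.5])                                  # class prevalences
rho = rng.uniform(0.3, 0.7, size=(2, J))                   # item-response probabilities
for _ in range(200):
    # E-step: posterior class-membership probability for each respondent
    log_lik = (X[:, None, :] * np.log(rho) +
               (1 - X[:, None, :]) * np.log(1 - rho)).sum(axis=2) + np.log(pi)
    post = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)
    # M-step: update prevalences and item-response probabilities
    pi = post.mean(axis=0)
    rho = (post.T @ X) / post.sum(axis=0)[:, None]
    rho = rho.clip(1e-6, 1 - 1e-6)

print("estimated class prevalences:", np.round(pi, 3))
print("posterior class probabilities for first respondent:", np.round(post[0], 3))
```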

Relevance: 10.00%

Abstract:

Chlorophyll fluorescence measurements have a wide range of applications from basic understanding of photosynthesis functioning to plant environmental stress responses and direct assessments of plant health. The measured signal is the fluorescence intensity (expressed in relative units) and the most meaningful data are derived from the time dependent increase in fluorescence intensity achieved upon application of continuous bright light to a previously dark adapted sample. The fluorescence response changes over time and is termed the Kautsky curve or chlorophyll fluorescence transient. Recently, Strasser and Strasser (1995) formulated a group of fluorescence parameters, called the JIP-test, that quantify the stepwise flow of energy through Photosystem II, using input data from the fluorescence transient. The purpose of this study was to establish relationships between the biochemical reactions occurring in PS II and specific JIP-test parameters. This was approached using isolated systems that facilitated the addition of modifying agents, a PS II electron transport inhibitor, an electron acceptor and an uncoupler, whose effects on PS II activity are well documented in the literature. The alteration to PS II activity caused by each of these compounds could then be monitored through the JIP-test parameters and compared and contrasted with the literature. The known alteration in PS II activity of Chenopodium album atrazine resistant and sensitive biotypes was also used to gauge the effectiveness and sensitivity of the JIP-test. The information gained from the in vitro study was successfully applied to an in situ study. This is the first in a series of four papers. It shows that the trapping parameters of the JIP-test were most affected by illumination and that the reduction in trapping had a run-on effect to inhibit electron transport. When irradiance exposure proceeded to photoinhibition, the electron transport probability parameter was greatly reduced and dissipation significantly increased. These results illustrate the advantage of monitoring a number of fluorescence parameters over the use of just one, which is often the case when the F_V/F_M ratio is used.
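The trapping and electron-transport quantities mentioned here derive from a few standard JIP-test formulas: F_V/F_M = (F_M - F_0)/F_M for maximum trapping yield, V_J = (F_J - F_0)/(F_M - F_0) for the relative fluorescence at the J step, and psi_0 = 1 - V_J for the electron-transport probability. The sketch below evaluates them on made-up transient values, purely to show the arithmetic; the numbers are not from this study.

```python
# Basic JIP-test quantities from illustrative O-J-I-P fluorescence readings.
# In practice F0, FJ, FI and FM are read from the transient at about 50 us,
# 2 ms, 30 ms and at the peak, respectively; the values below are made up.
F0, FJ, FI, FM = 500.0, 1400.0, 2100.0, 2500.0

FV = FM - F0                 # variable fluorescence
phi_P0 = FV / FM             # maximum quantum yield of primary photochemistry (trapping)
V_J = (FJ - F0) / FV         # relative variable fluorescence at the J step
psi_0 = 1.0 - V_J            # probability a trapped exciton drives electron transport beyond Q_A
phi_E0 = phi_P0 * psi_0      # quantum yield of electron transport

print(f"F_V/F_M = {phi_P0:.3f}")
print(f"psi_0   = {psi_0:.3f}")
print(f"phi_E0  = {phi_E0:.3f}")
```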

Relevance: 10.00%

Abstract:

A theta graph is a graph consisting of three pairwise internally disjoint paths with common end points. Methods for decomposing the complete graph K_ν into theta graphs with fewer than ten edges are given.