976 results for Algorithmic Probability
Abstract:
The extent to which density-dependent processes regulate natural populations is the subject of an ongoing debate. We contribute evidence to this debate showing that density-dependent processes influence the population dynamics of the ectoparasite Aponomma hydrosauri (Acari: Ixodidae), a tick species that infests reptiles in Australia. The first piece of evidence comes from an unusually long-term dataset on the distribution of ticks among individual hosts. If density-dependent processes are influencing either host mortality or vital rates of the parasite population, and those distributions can be approximated with negative binomial distributions, then general host-parasite models predict that the aggregation coefficient of the parasite distribution will increase with the average intensity of infections. We fit negative binomial distributions to the frequency distributions of ticks on hosts, and find that the estimated aggregation coefficient k increases with increasing average tick density. This pattern indirectly implies that one or more vital rates of the tick population must be changing with increasing tick density, because mortality rates of the tick's main host, the sleepy lizard, Tiliqua rugosa, are unaffected by changes in tick burdens. Our second piece of evidence is a re-analysis of experimental data on the attachment success of individual ticks to lizard hosts using generalized linear modelling. The probability of successful engorgement decreases with increasing numbers of ticks attached to a host. This is direct evidence of a density-dependent process that could lead to an increase in the aggregation coefficient of tick distributions described earlier. The population-scale increase in the aggregation coefficient is indirect evidence of a density-dependent process or processes sufficiently strong to produce a population-wide pattern, and thus also likely to influence population regulation. The direct observation of a density-dependent process is evidence of at least part of the responsible mechanism.
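The aggregation coefficient k of a negative binomial distribution can be estimated from host-level counts with the standard moment estimator k = mean² / (variance − mean); smaller k indicates stronger aggregation of parasites on hosts. A minimal sketch of tracking this estimate against mean tick burden, using hypothetical count data (not the study's dataset):

```python
import numpy as np

def aggregation_k(counts):
    """Moment estimator of the negative binomial aggregation coefficient k.

    For counts with mean m and sample variance s2 > m, k = m**2 / (s2 - m);
    smaller k means stronger aggregation of ticks on hosts.
    """
    counts = np.asarray(counts, dtype=float)
    m, s2 = counts.mean(), counts.var(ddof=1)
    if s2 <= m:
        raise ValueError("no overdispersion: variance <= mean")
    return m**2 / (s2 - m)

# Hypothetical samples of ticks-per-lizard at increasing mean burdens.
# Negative binomial counts are drawn as a gamma-Poisson mixture with shape k_true.
rng = np.random.default_rng(0)
for mean_burden, k_true in [(2.0, 0.5), (5.0, 0.9), (10.0, 1.4)]:
    lam = rng.gamma(shape=k_true, scale=mean_burden / k_true, size=400)
    counts = rng.poisson(lam)
    print(f"mean {counts.mean():5.2f}  estimated k {aggregation_k(counts):5.2f}")
```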
Abstract:
Many large-scale stochastic systems, such as telecommunications networks, can be modelled using a continuous-time Markov chain. However, it is frequently the case that a satisfactory analysis of their time-dependent, or even equilibrium, behaviour is impossible. In this paper, we propose a new method of analysing Markovian models, whereby the existing transition structure is replaced by a more amenable one. Using rates of transition given by the equilibrium expected rates of the corresponding transitions of the original chain, we are able to approximate its behaviour. We present two formulations of the idea of expected rates. The first provides a method for analysing time-dependent behaviour, while the second provides a highly accurate means of analysing equilibrium behaviour. We shall illustrate our approach with reference to a variety of models, giving particular attention to queueing and loss networks. (C) 2003 Elsevier Ltd. All rights reserved.
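Equilibrium expected rates are weighted by the chain's stationary distribution, so any use of the idea starts from solving πQ = 0. A minimal sketch for a small birth-death generator (an M/M/1/K queue used purely as a hypothetical example, not one of the paper's networks), with one plausible reading of an "equilibrium expected rate" shown at the end:

```python
import numpy as np

def stationary_distribution(Q):
    """Solve pi Q = 0 with sum(pi) = 1 for an irreducible CTMC generator Q."""
    n = Q.shape[0]
    A = Q.T.copy()
    A[-1, :] = 1.0          # replace one balance equation by the normalisation
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# M/M/1/K queue: arrivals at rate lam (blocked when full), service at rate mu.
lam, mu, K = 0.8, 1.0, 10
Q = np.zeros((K + 1, K + 1))
for i in range(K + 1):
    if i < K:
        Q[i, i + 1] = lam
    if i > 0:
        Q[i, i - 1] = mu
    Q[i, i] = -Q[i].sum()

pi = stationary_distribution(Q)
# One reading of an "equilibrium expected rate": the long-run rate at which
# arrival transitions actually fire, i.e. lam weighted by the probability of
# being in a state where the transition is possible.
expected_arrival_rate = lam * pi[:K].sum()
print(pi.round(4), expected_arrival_rate)
```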
Abstract:
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, in which the component-baseline hazard functions are completely unspecified. Estimation is carried out by maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright (C) 2003 John Wiley & Sons, Ltd.
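The structure of the mixture likelihood is easy to see in a fully parametric stand-in (the kind of model the paper compares against), with a logistic mixing probability and exponential component hazards in place of the semi-parametric ECM procedure. The covariate, parameter names, and simulated data below are hypothetical, used only to show how observed failures and censored times enter the full likelihood:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_log_lik(theta, t, cause, x):
    """Fully parametric two-component mixture for competing risks.

    Mixing probability pi(x) = expit(a0 + a1*x) for failure type 1;
    component hazards are exponential, lam_j * exp(b_j * x).
    cause: 1 or 2 for observed failure types, 0 for right-censored times.
    """
    a0, a1, log_l1, b1, log_l2, b2 = theta
    pi1 = expit(a0 + a1 * x)
    r1 = np.exp(np.clip(log_l1 + b1 * x, -30, 30))   # type-1 hazard rate
    r2 = np.exp(np.clip(log_l2 + b2 * x, -30, 30))   # type-2 hazard rate
    f1, S1 = r1 * np.exp(-r1 * t), np.exp(-r1 * t)
    f2, S2 = r2 * np.exp(-r2 * t), np.exp(-r2 * t)
    contrib = np.where(cause == 1, pi1 * f1,
               np.where(cause == 2, (1 - pi1) * f2,
                        pi1 * S1 + (1 - pi1) * S2))
    return -np.sum(np.log(contrib))

# Hypothetical data: a single covariate x, follow-up time t, cause of failure.
rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
is_type1 = rng.random(n) < expit(0.3 + 0.8 * x)
rate = np.where(is_type1, 0.5 * np.exp(0.4 * x), 0.2 * np.exp(-0.3 * x))
t_event = rng.exponential(1.0 / rate)
t_cens = rng.exponential(5.0, size=n)
t = np.minimum(t_event, t_cens)
cause = np.where(t_event <= t_cens, np.where(is_type1, 1, 2), 0)

fit = minimize(neg_log_lik, x0=np.zeros(6), args=(t, cause, x), method="BFGS")
print(fit.x.round(3))
```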
Abstract:
Sensitivity of output of a linear operator to its input can be quantified in various ways. In Control Theory, the input is usually interpreted as disturbance and the output is to be minimized in some sense. In stochastic worst-case design settings, the disturbance is considered random with imprecisely known probability distribution. The prior set of probability measures can be chosen so as to quantify how far the disturbance deviates from the white-noise hypothesis of Linear Quadratic Gaussian control. Such deviation can be measured by the minimal Kullback-Leibler informational divergence from the Gaussian distributions with zero mean and scalar covariance matrices. The resulting anisotropy functional is defined for finite power random vectors. Originally, anisotropy was introduced for directionally generic random vectors as the relative entropy of the normalized vector with respect to the uniform distribution on the unit sphere. The associated a-anisotropic norm of a matrix is then its maximum root mean square or average energy gain with respect to finite power or directionally generic inputs whose anisotropy is bounded above by a≥0. We give a systematic comparison of the anisotropy functionals and the associated norms. These are considered for unboundedly growing fragments of homogeneous Gaussian random fields on multidimensional integer lattice to yield mean anisotropy. Correspondingly, the anisotropic norms of finite matrices are extended to bounded linear translation invariant operators over such fields.
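For a zero-mean Gaussian vector with covariance Σ in m dimensions, minimizing the Kullback-Leibler divergence to N(0, λI) over λ gives λ = tr Σ / m and the closed form A = -½ ln det(mΣ / tr Σ). The following is a small numerical check of that reduction only (a sketch; the paper's operator-level and random-field constructions are not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def kl_gauss_to_isotropic(Sigma, lam):
    """KL divergence of N(0, Sigma) from N(0, lam * I)."""
    m = Sigma.shape[0]
    sign, logdet = np.linalg.slogdet(Sigma)
    return 0.5 * (np.trace(Sigma) / lam - m + m * np.log(lam) - logdet)

def anisotropy(Sigma):
    """Closed form of the minimum over lam: A = -0.5 * ln det(m * Sigma / tr(Sigma))."""
    m = Sigma.shape[0]
    sign, logdet = np.linalg.slogdet(m * Sigma / np.trace(Sigma))
    return -0.5 * logdet

rng = np.random.default_rng(2)
B = rng.normal(size=(4, 4))
Sigma = B @ B.T + 0.1 * np.eye(4)          # a random positive-definite covariance

direct = minimize_scalar(lambda lam: kl_gauss_to_isotropic(Sigma, lam),
                         bounds=(1e-6, 1e3), method="bounded").fun
print(direct, anisotropy(Sigma))           # the two values agree
```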
Abstract:
We present an abstract model of the leader election protocol used in the IEEE 1394 High Performance Serial Bus standard. The model is expressed in the probabilistic Guarded Command Language. By formal reasoning based on this description, we establish the probability of the root contention part of the protocol successfully terminating in terms of the number of attempts to do so. Some simple calculations then allow us to establish an upper bound on the time taken for those attempts.
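If each independent attempt at root contention succeeds with probability p, the probability of termination within n attempts is 1 − (1 − p)^n. The value p = 1/2 below is the usual assumption when both contending nodes flip fair coins and contention resolves exactly when they differ; the bound established in the paper depends on its own model, so the numbers here are purely illustrative:

```python
import math

def prob_success_within(n_attempts, p_per_attempt=0.5):
    """Probability that root contention resolves within n independent attempts,
    assuming each attempt succeeds with probability p."""
    return 1.0 - (1.0 - p_per_attempt) ** n_attempts

def attempts_needed(target, p_per_attempt=0.5):
    """Smallest number of attempts giving success probability at least `target`."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_per_attempt))

print(prob_success_within(5))        # 0.96875
print(attempts_needed(0.9999))       # 14
```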
Abstract:
A more efficient classifying cyclone (CC) for fine particle classification has been developed in recent years at the JKMRC. The novel CC, known as the JKCC, has modified profiles of the cyclone body, vortex finder, and spigot when compared to conventional hydrocyclones. The novel design increases the centrifugal force inside the cyclone and mitigates the short-circuiting flow that exists in all current cyclones. It also decreases the probability of particle contamination in the region near the cyclone spigot. Consequently the cyclone efficiency is improved while the unit maintains a simple structure. An international patent has been granted for this novel cyclone design. In the first development stage, a feasibility study, a 100 mm JKCC was tested and compared with two 100 mm commercial units. Very encouraging results were achieved, indicating good potential for the novel design. In the second development stage, a scale-up stage, the JKCC was scaled up to 200 mm in diameter, and its geometry was optimized through numerous tests. The performance of the JKCC was compared with a 150 mm commercial unit and exhibited sharper separation, finer separation size, and lower flow ratios. The JKCC is now being scaled up into a full-size (480 mm) hydrocyclone in the third development stage, an industrial study. The 480 mm diameter unit will be tested in an Australian coal preparation plant, and directly compared with a commercial CC operating under the same conditions. Classifying cyclone performance for fine coal could be further improved if the unit is installed in an inclined position. The study using the 200 mm JKCC has revealed that sharpness of separation improved and the flow ratio to underflow was decreased by 43% as the cyclone inclination was varied from the vertical position (0°) to the horizontal position (90°). The separation size was not affected, although the feed rate was slightly decreased. To ensure self-emptying upon shutdown, it is recommended that the JKCC be installed at an inclination of 75-80°. At this angle the cyclone performance is very similar to that at the horizontal position. Similar findings have been derived from the testing of a conventional hydrocyclone. This may be of benefit to operations that require improved performance from their classifying cyclones in terms of sharpness of separation and flow ratio, while tolerating slightly reduced feed rate.
Abstract:
A new modeling approach, multiple mapping conditioning (MMC), is introduced to treat mixing and reaction in turbulent flows. The model combines the advantages of the probability density function and the conditional moment closure methods and is based on a certain generalization of the mapping closure concept. An equivalent stochastic formulation of the MMC model is given. The validity of the closure hypothesis of the model is demonstrated by a comparison with direct numerical simulation results for the three-stream mixing problem. (C) 2003 American Institute of Physics.
Abstract:
A new lifetime distribution capable of modeling a bathtub-shaped hazard-rate function is proposed. The proposed model is derived as a limiting case of the Beta Integrated Model and has both the Weibull distribution and Type I extreme value distribution as special cases. The model can be considered as another useful 3-parameter generalization of the Weibull distribution. An advantage of the model is that the model parameters can be estimated easily based on a Weibull probability paper (WPP) plot that serves as a tool for model identification. Model characterization based on the WPP plot is studied. A numerical example is provided and comparison with another Weibull extension, the exponentiated Weibull, is also discussed. The proposed model compares well with other competing models to fit data that exhibits a bathtub-shaped hazard-rate function.
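Model identification on a Weibull probability paper plot rests on the transform y = ln(-ln(1 − F)) versus x = ln t, under which two-parameter Weibull data fall on a straight line with slope equal to the shape parameter; departures from linearity are what motivate extensions such as the proposed model. A sketch using median-rank plotting positions and hypothetical failure times (not the numerical example in the paper):

```python
import numpy as np

def wpp_coordinates(times):
    """Weibull-probability-paper coordinates using median-rank plotting positions."""
    t = np.sort(np.asarray(times, dtype=float))
    n = t.size
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)    # Benard's median ranks
    return np.log(t), np.log(-np.log(1.0 - F))

# Hypothetical failure times: a pure Weibull sample gives a near-linear WPP plot,
# and the fitted slope/intercept recover the shape and scale parameters.
rng = np.random.default_rng(3)
times = rng.weibull(1.8, size=60) * 100.0           # shape 1.8, scale 100
x, y = wpp_coordinates(times)
slope, intercept = np.polyfit(x, y, 1)
print(f"estimated shape {slope:.2f}, scale {np.exp(-intercept / slope):.1f}")
```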
Abstract:
Most finite element packages use the Newmark algorithm for time integration of structural dynamics. Various algorithms have been proposed to better optimize the high frequency dissipation of this algorithm. Hulbert and Chung proposed both implicit and explicit forms of the generalized alpha method. The algorithms optimize high frequency dissipation effectively, and despite recent work on algorithms that possess momentum conserving/energy dissipative properties in a non-linear context, the generalized alpha method remains an efficient way to solve many problems, especially with adaptive timestep control. However, the implicit and explicit algorithms use incompatible parameter sets and cannot be used together in a spatial partition, whereas this can be done for the Newmark algorithm, as Hughes and Liu demonstrated, and for the HHT-alpha algorithm developed from it. The present paper shows that the explicit generalized alpha method can be rewritten so that it becomes compatible with the implicit form. All four algorithmic parameters can be matched between the explicit and implicit forms. An element interface between implicit and explicit partitions can then be used, analogous to that devised by Hughes and Liu to extend the Newmark method. The stability of the explicit/implicit algorithm is examined in a linear context and found to exceed that of the explicit partition. The element partition is significantly less dissipative of intermediate frequencies than one using the HHT-alpha method. The explicit algorithm can also be rewritten so that the discrete equation of motion evaluates forces from displacements and velocities found at the predicted mid-point of a cycle. Copyright (C) 2003 John Wiley & Sons, Ltd.
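A minimal sketch of the implicit generalized-alpha update for a linear single-degree-of-freedom oscillator, using the standard Chung-Hulbert parameters driven by the high-frequency spectral radius; the explicit form and the implicit/explicit element partition discussed in the paper are beyond this sketch:

```python
import numpy as np

def generalized_alpha(m, c, k, force, d0, v0, h, n_steps, rho_inf=0.8):
    """Implicit generalized-alpha time integration for m*a + c*v + k*d = f(t)."""
    # Chung-Hulbert parameters in terms of the high-frequency spectral radius.
    am = (2.0 * rho_inf - 1.0) / (rho_inf + 1.0)
    af = rho_inf / (rho_inf + 1.0)
    gamma = 0.5 - am + af
    beta = 0.25 * (1.0 - am + af) ** 2

    d, v = d0, v0
    a = (force(0.0) - c * v0 - k * d0) / m          # consistent initial acceleration
    history = [(0.0, d)]
    for i in range(n_steps):
        t0, t1 = i * h, (i + 1) * h
        # Newmark predictors (the parts of d1, v1 not involving the unknown a1).
        d_pred = d + h * v + h * h * (0.5 - beta) * a
        v_pred = v + h * (1.0 - gamma) * a
        # Balance at the generalized mid-point, solved linearly for a1.
        lhs = m * (1.0 - am) + c * (1.0 - af) * gamma * h + k * (1.0 - af) * beta * h * h
        rhs = ((1.0 - af) * force(t1) + af * force(t0)
               - m * am * a
               - c * ((1.0 - af) * v_pred + af * v)
               - k * ((1.0 - af) * d_pred + af * d))
        a1 = rhs / lhs
        d = d_pred + h * h * beta * a1
        v = v_pred + h * gamma * a1
        a = a1
        history.append((t1, d))
    return history

# Undamped free vibration with omega = 2*pi: d(t) = cos(2*pi*t).
hist = generalized_alpha(m=1.0, c=0.0, k=(2 * np.pi) ** 2,
                         force=lambda t: 0.0, d0=1.0, v0=0.0,
                         h=0.01, n_steps=100)
print(hist[-1])    # displacement at t = 1 s, close to the exact value cos(2*pi) = 1
```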
Abstract:
This article presents Monte Carlo techniques for estimating network reliability. For highly reliable networks, techniques based on graph evolution models provide very good performance. However, they are known to have significant simulation cost. An existing hybrid scheme (based on partitioning the time space) is available to speed up the simulations; however, there are difficulties with optimizing the important parameter associated with this scheme. To overcome these difficulties, a new hybrid scheme (based on partitioning the edge set) is proposed in this article. The proposed scheme shows orders of magnitude improvement of performance over the existing techniques in certain classes of network. It also provides reliability bounds with little overhead.
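The baseline that all of these estimators improve on is crude (direct) Monte Carlo: sample each edge up or down and check connectivity. A sketch of that baseline for all-terminal reliability on a hypothetical small graph (the graph-evolution and hybrid partitioning schemes discussed in the article are not reproduced here):

```python
import random

def is_connected(n_nodes, edges):
    """Check that the surviving edges connect all nodes (iterative DFS from node 0)."""
    adj = {i: [] for i in range(n_nodes)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n_nodes

def crude_mc_reliability(n_nodes, edges, p_up, n_samples=50_000, seed=0):
    """Direct Monte Carlo estimate of all-terminal reliability:
    each edge is independently operational with probability p_up."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        alive = [e for e in edges if rng.random() < p_up]
        hits += is_connected(n_nodes, alive)
    return hits / n_samples

# Hypothetical 4-node ring with one chord.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(crude_mc_reliability(4, edges, p_up=0.9))
```

Direct sampling of this kind needs very large sample counts once failures become rare, which is precisely the highly reliable regime that the article's evolution-model and hybrid estimators are designed to handle efficiently.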
Abstract:
For zygosity diagnosis in the absence of genotypic data, or in the recruitment phase of a twin study where only single twins from same-sex pairs are being screened, or to provide a test for sample duplication leading to the false identification of a dizygotic pair as monozygotic, the appropriate analysis of respondents' answers to questions about zygosity is critical. Using data from a young adult Australian twin cohort (N = 2094 complete pairs and 519 singleton twins from same-sex pairs with complete responses to all zygosity items), we show that application of latent class analysis (LCA), fitting a 2-class model, yields results that show good concordance with traditional methods of zygosity diagnosis, but with certain important advantages. These include the ability, in many cases, to assign zygosity with specified probability on the basis of responses of a single informant (advantageous when one zygosity type is being oversampled); and the ability to quantify the probability of misassignment of zygosity, allowing prioritization of cases for genotyping as well as identification of cases of probable laboratory error. Out of 242 twins (from 121 like-sex pairs) where genotypic data were available for zygosity confirmation, only a single case was identified of incorrect zygosity assignment by the latent class algorithm. Zygosity assignment for that single case was identified by the LCA as uncertain (probability of being a monozygotic twin only 76%), and the co-twin's responses clearly identified the pair as dizygotic (probability of being dizygotic 100%). In the absence of genotypic data, or as a safeguard against sample duplication, application of LCA for zygosity assignment or confirmation is strongly recommended.
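A two-class latent class model of this kind treats binary zygosity items as conditionally independent given class, and is usually fitted by EM: the E-step computes each respondent's posterior class membership, the M-step updates class prevalences and item-endorsement probabilities. The sketch below uses hypothetical item data and generic variable names, not the study's questionnaire or software:

```python
import numpy as np

def lca_two_class(X, n_iter=200, seed=0):
    """EM for a 2-class latent class model with binary items.

    X: (N, J) array of 0/1 responses. Returns class prevalences, per-class
    item-endorsement probabilities, and each respondent's posterior membership.
    """
    rng = np.random.default_rng(seed)
    N, J = X.shape
    prev = np.array([0.5, 0.5])                    # class prevalences
    theta = rng.uniform(0.3, 0.7, size=(2, J))     # P(item = 1 | class)
    for _ in range(n_iter):
        # E-step: posterior probability of each class for each respondent.
        log_lik = (X[:, None, :] * np.log(theta)[None, :, :]
                   + (1 - X[:, None, :]) * np.log(1 - theta)[None, :, :]).sum(axis=2)
        log_post = np.log(prev)[None, :] + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update prevalences and item probabilities.
        prev = post.mean(axis=0)
        theta = (post.T @ X) / post.sum(axis=0)[:, None]
        theta = np.clip(theta, 1e-6, 1 - 1e-6)
    return prev, theta, post

# Hypothetical data: four binary "how alike were you" style items for 1000 twins.
rng = np.random.default_rng(4)
truth = rng.random(1000) < 0.55
probs = np.where(truth[:, None], [0.9, 0.85, 0.8, 0.7], [0.15, 0.2, 0.25, 0.3])
X = (rng.random((1000, 4)) < probs).astype(int)
prev, theta, post = lca_two_class(X)
print(prev.round(2))
print(theta.round(2))
```

The posterior membership probabilities in `post` are what would support assigning zygosity with a stated probability and flagging uncertain cases for genotyping.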
Abstract:
Chlorophyll fluorescence measurements have a wide range of applications from basic understanding of photosynthesis functioning to plant environmental stress responses and direct assessments of plant health. The measured signal is the fluorescence intensity (expressed in relative units) and the most meaningful data are derived from the time dependent increase in fluorescence intensity achieved upon application of continuous bright light to a previously dark adapted sample. The fluorescence response changes over time and is termed the Kautsky curve or chlorophyll fluorescence transient. Recently, Strasser and Strasser (1995) formulated a group of fluorescence parameters, called the JIP-test, that quantify the stepwise flow of energy through Photosystem II, using input data from the fluorescence transient. The purpose of this study was to establish relationships between the biochemical reactions occurring in PS II and specific JIP-test parameters. This was approached using isolated systems that facilitated the addition of modifying agents, a PS II electron transport inhibitor, an electron acceptor and an uncoupler, whose effects on PS II activity are well documented in the literature. The alteration to PS II activity caused by each of these compounds could then be monitored through the JIP-test parameters and compared and contrasted with the literature. The known alteration in PS II activity of Chenopodium album atrazine resistant and sensitive biotypes was also used to gauge the effectiveness and sensitivity of the JIP-test. The information gained from the in vitro study was successfully applied to an in situ study. This is the first in a series of four papers. It shows that the trapping parameters of the JIP-test were most affected by illumination and that the reduction in trapping had a run-on effect to inhibit electron transport. When irradiance exposure proceeded to photoinhibition, the electron transport probability parameter was greatly reduced and dissipation significantly increased. These results illustrate the advantage of monitoring a number of fluorescence parameters over the use of just one, which is often the case when the F-V/F-M ratio is used.
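Two of the JIP-test quantities referred to here can be read straight off the transient: the maximum quantum yield of primary photochemistry, φ_Po = F_V/F_M = (F_M − F_O)/F_M, and the electron-transport probability ψ_o = 1 − V_J, with V_J = (F_J − F_O)/(F_M − F_O) and F_J conventionally taken at 2 ms. A sketch assuming the transient is available as time/intensity arrays; the marker times, toy curve, and variable names are illustrative only:

```python
import numpy as np

def jip_basic(t_seconds, F):
    """Basic JIP-test parameters from a fast chlorophyll fluorescence transient.

    Uses F_O at the earliest recorded point, F_J interpolated at 2 ms,
    and F_M as the peak of the transient.
    """
    t = np.asarray(t_seconds, dtype=float)
    F = np.asarray(F, dtype=float)
    F_O = F[np.argmin(t)]                 # initial fluorescence (O step)
    F_J = np.interp(2e-3, t, F)           # J step, conventionally at 2 ms
    F_M = F.max()                         # maximal fluorescence (P step)
    phi_Po = (F_M - F_O) / F_M            # = F_V / F_M, trapping efficiency
    V_J = (F_J - F_O) / (F_M - F_O)
    psi_o = 1.0 - V_J                     # probability an electron moves beyond Q_A
    return {"Fv/Fm": phi_Po, "Vj": V_J, "psi_o": psi_o}

# Hypothetical transient sampled logarithmically from about 20 microseconds to 1 s.
t = np.logspace(-4.7, 0, 200)
F = 500 + 2000 * (1 - np.exp(-t / 5e-3))          # a toy O-to-P rise, not real data
print(jip_basic(t, F))
```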
Abstract:
A theta graph is a graph consisting of three pairwise internally disjoint paths with common end points. Methods for decomposing the complete graph K_ν into theta graphs with fewer than ten edges are given.
Abstract:
We describe a direct method of partitioning the 840 Steiner triple systems of order 9 into 120 large sets. The method produces partitions in which all of the large sets are isomorphic and we apply the method to each of the two non-isomorphic large sets of STS(9).
Abstract:
OBJECTIVES We sought to use quantitative markers of the regional left ventricular (LV) response to stress to infer whether diabetic cardiomyopathy is associated with ischemia. BACKGROUND Diabetic cardiomyopathy has been identified in clinical and experimental studies, but its cause remains unclear. METHODS We studied 41 diabetic patients with normal resting LV function and a normal dobutamine echo and 41 control subjects with a low probability of coronary disease. Peak myocardial systolic velocity (Sm) and early diastolic velocity (Em) in each segment were averaged, and mean Sm and Em were compared between diabetic patients and controls and among different stages of dobutamine stress. RESULTS Both Sm and Em progressively increased from rest to peak dobutamine stress. In the diabetic group, Sm was significantly lower than in control subjects at baseline (4.2 ± 0.9 cm/s vs. 4.7 ± 0.9 cm/s, p = 0.012). However, Sm at a low dose (6.0 ± 1.3), before peak (8.4 ± 1.8), and at peak stress (8.9 ± 1.8) in diabetic patients was not significantly different from that of controls (6.3 ± 1.4, 8.9 ± 1.6, and 9.6 ± 2.1 cm/s, respectively). The Em (cm/s) in the diabetic group (rest: 4.2 ± 1.2; low dose: 5.0 ± 1.4; pre-peak: 5.3 ± 1.1; peak: 5.9 ± 1.5) was significantly lower than that of controls (rest: 5.8 ± 1.5; low dose: 6.6 ± 1.5; pre-peak: 6.9 ± 1.3; peak: 7.3 ± 1.7; all p < 0.001). However, the absolute and relative increases in Sm or Em from rest to peak stress were similar in diabetic and control groups. CONCLUSIONS Subtle LV dysfunction is present in diabetic patients without overt cardiac disease. The normal response to stress suggests that ischemia due to small-vessel disease may not be important in early diabetic heart muscle disease. (C) 2003 by the American College of Cardiology Foundation.