169 results for distribution (probability theory)
in University of Queensland eSpace - Australia
Abstract:
Mineral processing plants use two main processes: comminution and separation. The objective of the comminution process is to break complex particles consisting of numerous minerals into smaller, simpler particles in which each particle consists primarily of only one mineral. The process in which the mineral composition distribution of particles changes due to breakage is called 'liberation'. The purpose of separation is to separate particles consisting of valuable mineral from those containing non-valuable mineral. The energy required to break particles to fine sizes is expensive, and therefore the mineral processing engineer must design the circuit so that the breakage of liberated particles is reduced in favour of breaking composite particles. In order to effectively optimize a circuit through simulation it is necessary to predict how the mineral composition distributions change due to comminution. Such a model is called a 'liberation model for comminution'. It was generally considered that such a model should incorporate information about the ore, such as its texture. However, the relationship between the feed and product particles can be estimated using a probability method, the probability being defined as the probability that a feed particle of a particular composition and size will form a particular product particle of a particular size and composition. The model is based on maximizing the entropy of this probability subject to mass and composition constraints. This methodology allows a liberation model to be developed not only for binary particles but also for particles consisting of many minerals. Results from applying the model to a real plant ore are presented. A laboratory ball mill was used to break particles, and the results from this experiment were used to estimate the kernel that represents the relationship between parent and progeny particles. A second feed, consisting primarily of heavy particles subsampled from the main ore, was then ground through the same mill. The results from the first experiment were used to predict the product of the second experiment, and the agreement between the predicted and actual results is very good. It is therefore recommended that more extensive validation be carried out to fully evaluate the method. (C) 2003 Elsevier Ltd. All rights reserved.
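As a rough illustration of the entropy-maximization idea described above (not the paper's actual kernel), the sketch below computes a maximum-entropy distribution of progeny grades subject to a mass-balance constraint and a composition constraint forcing the mass-weighted mean grade to equal the parent grade; the grade grid and parent grade are assumed values.

```python
# Minimal sketch (illustrative, not the paper's kernel): maximum-entropy estimate
# of the grade distribution of progeny particles, constrained so that the
# mass-weighted mean grade equals the parent particle's grade.
import numpy as np
from scipy.optimize import minimize

grades = np.linspace(0.0, 1.0, 11)   # progeny grade classes (fraction of valuable mineral)
parent_grade = 0.3                   # grade of the parent (feed) particle, assumed

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p))     # minimizing this maximizes entropy

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},                   # mass balance
    {"type": "eq", "fun": lambda p: np.dot(p, grades) - parent_grade},  # composition
]
p0 = np.full(grades.size, 1.0 / grades.size)
res = minimize(neg_entropy, p0, bounds=[(0.0, 1.0)] * grades.size,
               constraints=constraints, method="SLSQP")
print(np.round(res.x, 4))            # maximum-entropy progeny grade distribution
```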
Abstract:
Stochastic models based on Markov birth processes are constructed to describe the process of invasion of a fly larva by entomopathogenic nematodes. Various forms for the birth (invasion) rates are proposed. These models are then fitted to data sets describing the observed numbers of nematodes that have invaded a fly larva after a fixed period of time. Non-linear birth rates are required to achieve good fits to these data, and their precise form leads to different patterns of invasion being identified for the three populations of nematodes considered. One of these (Nemasys) showed the greatest propensity for invasion. This form of modelling may be useful more generally for analysing data that show variation different from that expected under a binomial distribution.
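A minimal sketch of the kind of Markov birth process described above, with an assumed density-dependent (non-linear) invasion rate; the rate form and parameter values are illustrative, not those fitted in the study.

```python
# Minimal sketch: simulate a Markov birth process for nematode invasion with a
# density-dependent rate and record how many have invaded a host by a fixed time.
# The rate form lambda_n = a * (N - n) / (1 + b * n) is an assumed example.
import numpy as np

rng = np.random.default_rng(0)

def simulate_invasion(N=30, a=0.2, b=0.5, t_end=24.0):
    """Return the number of nematodes (out of N exposed) invaded by time t_end."""
    n, t = 0, 0.0
    while n < N:
        rate = a * (N - n) / (1.0 + b * n)   # non-linear birth (invasion) rate
        t += rng.exponential(1.0 / rate)     # waiting time to the next invasion
        if t > t_end:
            break
        n += 1
    return n

counts = [simulate_invasion() for _ in range(1000)]
print(np.mean(counts), np.var(counts))       # over-/under-dispersion vs. binomial
```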
Abstract:
Argumentation is modelled as a game where the payoffs are measured in terms of the probability that the claimed conclusion is, or is not, defeasibly provable, given a history of arguments that have actually been exchanged and given the probability of the factual premises. The probability of a conclusion is calculated using a standard variant of Defeasible Logic in combination with standard probability calculus. It is a new element of the present approach that the exchange of arguments is analysed with game-theoretic tools, yielding a prescriptive, and to some extent even predictive, account of the actual course of play. A brief comparison with existing argument-based dialogue approaches confirms that such a prescriptive account of actual argumentation has been largely lacking in the approaches proposed so far.
Abstract:
Cox's theorem states that, under certain assumptions, any measure of belief is isomorphic to a probability measure. This theorem, although intended as a justification of the subjectivist interpretation of probability theory, is sometimes presented as an argument for more controversial theses. Of particular interest is the thesis that the only coherent means of representing uncertainty is via the probability calculus. In this paper I examine the logical assumptions of Cox's theorem and I show how these impinge on the philosophical conclusions thought to be supported by the theorem. I show that the more controversial thesis is not supported by Cox's theorem. (C) 2003 Elsevier Inc. All rights reserved.
Abstract:
The argument from fine tuning is supposed to establish the existence of God from the fact that the evolution of carbon-based life requires the laws of physics and the boundary conditions of the universe to be more or less as they are. We demonstrate that this argument fails. In particular, we focus on problems associated with the role probabilities play in the argument. We show that, even granting the fine tuning of the universe, it does not follow that the universe is improbable, thus no explanation of the fine tuning, theistic or otherwise, is required.
Abstract:
Statistics is known to be an art as well as a science. The training of mathematical physicists predisposes them towards hypothesising plausible Bayesian priors. Tony Bracken and I were of that mind [1], but in our discussions we also recognised the Bayesian will-o'-the-wisp illustrated below.
Abstract:
We consider a buying-selling problem in which two stops of a sequence of independent random variables are required. An optimal stopping rule and the value of the game are obtained.
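A minimal sketch of one way such a two-stop (buy-then-sell) problem can be solved by backward induction, assuming n i.i.d. Uniform(0,1) prices and a mandatory purchase followed by a later sale; the horizon and distribution are illustrative assumptions, not necessarily the paper's setting.

```python
# Minimal sketch: backward induction for a buy-then-sell double stopping problem
# on n i.i.d. Uniform(0,1) prices, maximising expected sale price minus purchase
# price.  All numbers are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 10                                   # horizon length (assumed)
x = rng.random(200_000)                  # Monte Carlo sample of a single price

# Selling values: s[k] = value of optimal selling with k observations left.
s = np.zeros(n + 1)
s[1] = x.mean()
for k in range(2, n + 1):
    s[k] = np.maximum(x, s[k - 1]).mean()

# Buying values: b[k] = value with k observations left and nothing bought yet.
b = np.zeros(n + 1)
b[2] = (s[1] - x).mean()                 # forced to buy at the first of the last two
for k in range(3, n + 1):
    b[k] = np.maximum(s[k - 1] - x, b[k - 1]).mean()

print("value of the game:", b[n])
# Rule: with k prices left and current price x, buy when s[k-1] - x >= b[k-1];
# after buying, sell when the observed price exceeds s[remaining - 1].
```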
Abstract:
We consider the statistical properties of the local density of states of a one-dimensional Dirac equation in the presence of various types of disorder with Gaussian white-noise distribution. It is shown how either the replica trick or supersymmetry can be used to calculate exactly all the moments of the local density of states. Careful attention is paid to how the results change if the local density of states is averaged over atomic length scales. For both the replica trick and supersymmetry the problem is reduced to finding the ground state of a zero-dimensional Hamiltonian which is written solely in terms of a pair of coupled spins that are elements of u(1,1). This ground state is explicitly found for the particular case of the Dirac equation corresponding to an infinite metallic quantum wire with a single conduction channel. The calculated moments of the local density of states agree with those found previously by Al'tshuler and Prigodin [Sov. Phys. JETP 68 (1989) 198] using a technique based on recursion relations for Feynman diagrams. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
Normal mixture models are increasingly being used to model the distributions of a wide variety of random phenomena and to cluster sets of continuous multivariate data. However, for a data set containing a group or groups of observations with longer-than-normal tails or atypical observations, the use of normal components may unduly affect the fit of the mixture model. In this paper, we consider a more robust approach by modelling the data by a mixture of t distributions. The use of the ECM algorithm to fit this t mixture model is described, and examples of its use are given in the context of clustering multivariate data in the presence of atypical observations in the form of background noise.
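A minimal sketch of an EM/ECM-type fit of a mixture of t distributions in the univariate case with fixed degrees of freedom; the data, number of components, and degrees of freedom are illustrative assumptions rather than the paper's examples.

```python
# Minimal sketch (assumed univariate case, fixed degrees of freedom): an EM-type
# fit of a two-component mixture of t distributions, in the spirit of the ECM
# approach described above.
import numpy as np
from scipy.stats import t as t_dist

rng = np.random.default_rng(2)
x = np.concatenate([rng.standard_t(4, 300) * 0.5,         # cluster 1, heavy tails
                    rng.standard_t(4, 300) * 0.5 + 4.0])  # cluster 2, shifted

K, nu = 2, 4.0
pi = np.full(K, 1.0 / K)
mu = np.array([x.min(), x.max()])
sigma = np.full(K, x.std())

for _ in range(200):
    # E-step: responsibilities tau and latent scaling weights u
    dens = np.stack([pi[k] * t_dist.pdf(x, df=nu, loc=mu[k], scale=sigma[k])
                     for k in range(K)])
    tau = dens / dens.sum(axis=0)
    delta = np.stack([((x - mu[k]) / sigma[k]) ** 2 for k in range(K)])
    u = (nu + 1.0) / (nu + delta)
    # CM-steps: update mixing weights, locations, and scales
    pi = tau.mean(axis=1)
    mu = (tau * u * x).sum(axis=1) / (tau * u).sum(axis=1)
    sigma = np.sqrt((tau * u * (x - mu[:, None]) ** 2).sum(axis=1) / tau.sum(axis=1))

print(np.round(pi, 3), np.round(mu, 3), np.round(sigma, 3))
```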
Cavity QED analog of the harmonic-oscillator probability distribution function and quantum collapses
Abstract:
We establish a connection between the simple harmonic oscillator and a two-level atom interacting with resonant, quantized cavity and strong driving fields, which suggests an experiment to measure the harmonic oscillator's probability distribution function. To achieve this, we calculate the Autler-Townes spectrum by coupling the system to a third level. We find that there are two different regions of the atomic dynamics depending on the ratio of the Rabi frequency Omega_c of the cavity field to the Rabi frequency Omega of the driving field. For Omega_c
Abstract:
A new approach based on nonlocal density functional theory to determine the pore size distribution (PSD) of activated carbons and the energetic heterogeneity of the pore walls is proposed. The energetic heterogeneity is modeled with an energy distribution function (EDF) describing the distribution of the solid-fluid potential well depth (this distribution is a Dirac delta function for an energetically homogeneous surface). The approach allows simultaneous determination of the PSD (assuming slit-shaped pores) and the EDF from nitrogen or argon isotherms at their respective boiling points, using a set of local isotherms calculated for a range of pore widths and solid-fluid potential well depths. It is found that the structure of the pore wall surface differs significantly from that of graphitized carbon black. This could be attributed to defects in the crystalline structure of the surface, active oxide centers, the finite size of the pore walls (in either wall thickness or pore length), and so forth. These factors depend on the precursor and on the carbonization and activation process, and hence provide a fingerprint for each adsorbent. The approach allows very accurate correlation of the experimental adsorption isotherm and leads to PSDs that are simpler and more realistic than those obtained with the original nonlocal density functional theory.
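A minimal sketch of the inversion step implied above: the measured isotherm is expressed as a weighted sum of local isotherms over a grid of pore widths and well depths, and non-negative weights are recovered by least squares. The "local isotherms" here are a toy Langmuir-type stand-in so the code is self-contained; in the actual approach they would come from nonlocal density functional theory calculations.

```python
# Minimal sketch: invert a generalized adsorption integral equation for a joint
# distribution over pore width and solid-fluid well depth.  The local isotherm
# form, grids, and synthetic data are assumptions for illustration only.
import numpy as np
from scipy.optimize import nnls

p = np.logspace(-5, 0, 60)                      # relative pressures
widths = np.linspace(0.5, 4.0, 15)              # slit pore widths, nm (assumed grid)
depths = np.linspace(5.0, 15.0, 8)              # well depths, arbitrary units (assumed)

def local_isotherm(width, depth):
    # Toy local isotherm: stronger wells / narrower pores fill at lower pressure.
    k = depth / width
    return k * p / (1.0 + k * p)

# Kernel matrix: one column per (width, depth) pair.
A = np.column_stack([local_isotherm(w, d) for w in widths for d in depths])

# Synthetic "experimental" isotherm generated from a known weight vector.
true_f = np.random.default_rng(3).random(A.shape[1])
true_f /= true_f.sum()
q_exp = A @ true_f

f, _ = nnls(A, q_exp)                           # non-negative joint PSD/EDF weights
psd = f.reshape(widths.size, depths.size).sum(axis=1)   # marginal over well depth
print(np.round(psd, 3))
```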
Abstract:
There are at least two reasons why a symmetric, unimodal, diffuse-tailed hyperbolic secant distribution is interesting in real-life applications. It displays one of the common types of non-normality in natural data and is closely related to the logistic and Cauchy distributions that often arise in practice. To test the difference in location between two hyperbolic secant distributions, we develop a simple linear rank test with trigonometric scores. We investigate the small-sample and asymptotic properties of the test statistic and provide tables of the exact null distribution for small sample sizes. We compare the test to the Wilcoxon two-sample test and show that, although the asymptotic powers of the tests are comparable, the present test has certain practical advantages over the Wilcoxon test.
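A minimal sketch of a two-sample linear rank test with trigonometric scores, using the score function a(i) = -cos(pi*i/(N+1)) and a permutation p-value; the score choice and the data are illustrative assumptions and need not match the paper's exact construction.

```python
# Minimal sketch: two-sample linear rank test for a location shift with
# trigonometric scores a(i) = -cos(pi * i / (N + 1)) and a permutation p-value.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(4)
x = rng.logistic(loc=0.0, size=25)     # sample 1 (stand-in for heavy-tailed data)
y = rng.logistic(loc=0.7, size=25)     # sample 2, shifted in location

def trig_score_statistic(x, y):
    """Sum of trigonometric scores attached to the ranks of sample y."""
    pooled = np.concatenate([x, y])
    ranks = rankdata(pooled)
    scores = -np.cos(np.pi * ranks / (pooled.size + 1))
    return scores[x.size:].sum()

obs = trig_score_statistic(x, y)
pooled = np.concatenate([x, y])
perms = np.array([trig_score_statistic(*np.split(rng.permutation(pooled), [x.size]))
                  for _ in range(5000)])
p_value = np.mean(np.abs(perms) >= np.abs(obs))   # two-sided permutation p-value
print(obs, p_value)
```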
Abstract:
The generalized secant hyperbolic distribution (GSHD) proposed in Vaughan (2002) includes a wide range of unimodal symmetric distributions, with the Cauchy and uniform distributions being the limiting cases, and the logistic and hyperbolic secant distributions being special cases. The current article derives an asymptotically efficient rank estimator of the location parameter of the GSHD and suggests the corresponding one- and two-sample optimal rank tests. The rank estimator derived is compared to the modified MLE of location proposed in Vaughan (2002). By combining these two estimators, a computationally attractive method for constructing an exact confidence interval of the location parameter is developed. The statistical procedures introduced in the current article are illustrated by examples.