68 results for Probability distributions
Abstract:
Part I (Manjunath et al., 1994, Chem. Engng Sci. 49, 1451-1463) of this paper showed that the random particle numbers and size distributions in precipitation processes in very small drops, obtained by stochastic simulation techniques, deviate substantially from the predictions of conventional population balance. The foregoing problem is considered in this paper in terms of a mean field approximation obtained by applying a first-order closure to an unclosed set of mean field equations presented in Part I. The mean field approximation consists of two mutually coupled partial differential equations featuring (i) the probability distribution for residual supersaturation and (ii) the mean number density of particles for each size and supersaturation, from which all average properties and fluctuations can be calculated. The mean field equations have been solved by finite difference methods for (i) crystallization and (ii) precipitation of a metal hydroxide, both occurring in a single drop of specified initial supersaturation. The results for the average number of particles, average residual supersaturation, average size distribution, and fluctuations about the average values have been compared with those obtained by stochastic simulation techniques and by population balance. This comparison shows that the mean field predictions are substantially superior to those of population balance, as judged by the close proximity of results from the former to those from stochastic simulations. The agreement is excellent for broad initial supersaturation distributions at short times but deteriorates progressively at larger times. For steep initial supersaturation distributions, predictions of the mean field theory are not satisfactory, thus calling for higher-order approximations. The merit of the mean field approximation over stochastic simulation lies in its potential to reduce the expensive computation times involved in simulation. More effective computational techniques could not only enhance this advantage of the mean field approximation but also make it possible to use higher-order approximations, eliminating the constraints under which the stochastic dynamics of the process can be predicted accurately.
Abstract:
The cum ∛f rule [Singh (1975)] has been suggested in the literature for finding approximately optimum strata boundaries for proportional allocation, when the stratification is done on the study variable. This paper shows that for the class of density functions arising from the Wang and Aggarwal (1984) representation of the Lorenz curve (or DBV curves in the case of inventory theory), the cum ∛f rule, instead of giving approximately optimum strata boundaries, yields exactly optimum boundaries. It is also shown that the conjecture of Mahalanobis (1952), “...an optimum or nearly optimum solutions will be obtained when the expected contribution of each stratum to the total aggregate value of Y is made equal for all strata”, yields exactly optimum strata boundaries for the case considered in the paper.
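To make the rule concrete, here is a minimal sketch (the grid approximation, the hypothetical density f, and the number of strata L are all assumptions, not the paper's setup): the boundaries are the points that equalize the increments of the cumulative of the cube root of f.

```python
import numpy as np

# Illustrative sketch (not code from the paper): strata boundaries from the
# cum-cube-root rule, obtained by equalizing increments of the cumulative of
# f(x)^(1/3) on a grid. The density f and strata count L are assumptions.

def cum_rule_boundaries(f, a, b, L, n_grid=10_000):
    """Approximate strata boundaries on [a, b] for density f with L strata."""
    x = np.linspace(a, b, n_grid)
    g = np.cbrt(f(x))                  # f(x)^(1/3), per the cum cube-root rule
    cum = np.cumsum(g)
    cum /= cum[-1]                     # normalize the cumulative to [0, 1]
    targets = np.arange(1, L) / L      # equal increments of the cumulative
    return x[np.searchsorted(cum, targets)]

# Example with a hypothetical right-skewed density on [0, 1]
print(cum_rule_boundaries(lambda x: 2.0 * (1.0 - x), 0.0, 1.0, L=4))
```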
Abstract:
An expression is derived for the probability that the determinant of an n x n matrix over a finite field vanishes; from this it is deduced that, for a fixed field with q elements, this probability tends to 1 - ∏_{k=1}^{∞} (1 - q^{-k}) as n tends to infinity.
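This limit is easy to check numerically, since the probability that a uniformly random n x n matrix over F_q is nonsingular is the classical product ∏_{k=1}^{n} (1 - q^{-k}); an illustrative computation in exact arithmetic:

```python
from fractions import Fraction

# Numerical check of the limit quoted above: the probability that a random
# n x n matrix over F_q is singular is 1 - prod_{k=1}^{n} (1 - q^{-k}), which
# increases to 1 - prod_{k=1}^{inf} (1 - q^{-k}) as n grows.

def singular_probability(q, n):
    p_nonsingular = Fraction(1)
    for k in range(1, n + 1):
        p_nonsingular *= 1 - Fraction(1, q**k)
    return 1 - p_nonsingular

for n in (2, 5, 10, 40):
    print(n, float(singular_probability(2, n)))  # approaches ~0.7112 for q = 2
```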
Abstract:
The statistical minimum-risk pattern recognition problem, when the classification costs are random variables of unknown statistics, is considered. Using medical diagnosis as a possible application, the problem of learning the optimal decision scheme is studied for a two-class, two-action case as a first step. This reduces to the problem of learning the optimum threshold (for taking the appropriate action) on the a posteriori probability of one class. A recursive procedure for updating an estimate of the threshold is proposed. The estimation procedure does not require knowledge of the actual class labels of the sample patterns in the design set. The adaptive scheme of using the present threshold estimate for taking action on the next sample is shown to converge, in probability, to the optimum. The results of a computer simulation study of three learning schemes demonstrate the theoretically predictable salient features of the adaptive scheme.
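The abstract does not spell out the recursion, so the sketch below only illustrates the class of procedure such a recursive estimator belongs to: a Robbins-Monro stochastic approximation that converges in probability to the root of an unknown regression function. Everything in it (the function g, the noise, the target 0.7) is an assumption for illustration.

```python
import random

# Generic Robbins-Monro sketch, not the paper's scheme. The optimum threshold
# is modeled as the root of an unknown regression function g(t) = E[Y | t];
# here g(t) = t - 0.7 observed with noise, so the recursion should converge
# in probability to t* = 0.7.

random.seed(0)

def noisy_g(t):
    return (t - 0.7) + random.gauss(0.0, 0.5)    # noisy observation of g(t)

t = 0.5                                          # initial threshold estimate
for n in range(1, 20_001):
    t -= (1.0 / n) * noisy_g(t)                  # step sizes a_n = 1/n
print(round(t, 3))                               # close to 0.7
```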
Abstract:
The distribution of relative velocities between colliding particles in shear flows of inelastic spheres is analysed in the volume fraction range 0.4-0.64. Particle interactions are considered to be due to instantaneous binary collisions, and the collision model has a normal coefficient of restitution e(n) (the negative of the ratio of the post- and pre-collisional relative velocities of the particles along the line joining the centres) and a tangential coefficient of restitution e(t) (the negative of the ratio of the post- and pre-collisional velocities perpendicular to the line joining the centres). The distribution of pre-collisional normal relative velocities (along the line joining the centres of the particles) is found to be an exponential distribution for particles with a low normal coefficient of restitution, in the range 0.6-0.7. This is in contrast to the Gaussian distribution for the normal relative velocity in an elastic fluid in the absence of shear. A composite distribution function, which consists of an exponential and a Gaussian component, is proposed to span the range of inelasticities considered here. In the case of rough particles, the relative velocity tangential to the surfaces at contact is also evaluated, and it is found to be close to a Gaussian distribution even for highly inelastic particles. Empirical relations are formulated for the relative velocity distribution. These are used to calculate the collisional contributions to the pressure, shear stress and energy dissipation rate in a shear flow. The results of the calculation were found to be in quantitative agreement with simulation results, even for low coefficients of restitution for which the predictions obtained using the Enskog approximation are in error by an order of magnitude. The results are also applied to flow down an inclined plane, to predict the angle of repose and the variation of the volume fraction with the angle of inclination. These results are also found to be in quantitative agreement with previous simulations.
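A schematic form of the composite distribution proposed above (the weight w and the scale parameters are placeholders, not the paper's fitted values) is a normalized mixture of an exponential and a Gaussian component:

```python
import numpy as np

# Sketch of an exponential-plus-Gaussian composite PDF for the pre-collisional
# normal relative velocity v. Both components integrate to one, so the mixture
# is normalized for any weight w in [0, 1]. Parameter values are assumptions.

def composite_pdf(v, w=0.5, s_exp=1.0, s_gauss=1.0):
    exp_part = np.exp(-np.abs(v) / s_exp) / (2.0 * s_exp)          # exponential
    gauss_part = np.exp(-v**2 / (2.0 * s_gauss**2)) / np.sqrt(2.0 * np.pi * s_gauss**2)
    return w * exp_part + (1.0 - w) * gauss_part                   # normalized mix

v = np.linspace(-5.0, 5.0, 11)
print(composite_pdf(v))
```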
Abstract:
Hydrologic impacts of climate change are usually assessed by downscaling the General Circulation Model (GCM) output of large-scale climate variables to local-scale hydrologic variables. Such an assessment is characterized by uncertainty resulting from the ensembles of projections generated with multiple GCMs, known as intermodel or GCM uncertainty. Ensemble averaging with the assignment of weights to GCMs based on model evaluation is one method of addressing such uncertainty and is used in the present study for regional-scale impact assessment. GCM outputs of large-scale climate variables are downscaled to subdivisional-scale monsoon rainfall. Weights are assigned to the GCMs on the basis of model performance and model convergence, which are evaluated with the Cumulative Distribution Functions (CDFs) generated from the downscaled GCM output (for both 20th Century [20C3M] and future scenarios) and observed data. The ensemble averaging approach, with the assignment of weights to GCMs, is characterized by the uncertainty caused by partial ignorance, which stems from the nonavailability of the outputs of some of the GCMs for a few scenarios (in the Intergovernmental Panel on Climate Change [IPCC] data distribution center for Assessment Report 4 [AR4]). This uncertainty is modeled with imprecise probability, i.e., the probability is represented as an interval gray number. Furthermore, the CDF generated with one GCM is entirely different from that with another, and therefore the use of multiple GCMs results in a band of CDFs. Representing this band of CDFs with a single-valued weighted mean CDF may be misleading. Such a band of CDFs can only be represented with an envelope that contains all the CDFs generated with the available GCMs. An imprecise CDF represents such an envelope, which not only contains the CDFs generated with all the available GCMs but also, to an extent, accounts for the uncertainty resulting from the missing GCM output. This concept of imprecise probability is also validated in the present study. The imprecise CDFs of monsoon rainfall are derived for three 30-year time slices, the 2020s, 2050s and 2080s, with the A1B, A2 and B1 scenarios. The model is demonstrated with the prediction of monsoon rainfall in the Orissa meteorological subdivision, which shows a possible decreasing trend in the future.
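To make the envelope idea concrete, here is a minimal numpy sketch (the CDFs below are synthetic placeholders, not the study's downscaled rainfall CDFs) contrasting the imprecise-CDF envelope with a single weighted-mean CDF:

```python
import numpy as np

# With several GCM-derived CDFs evaluated on a common rainfall grid, the
# imprecise CDF is the band between the pointwise lower and upper envelopes,
# rather than any single weighted-mean curve. All values here are synthetic.

x = np.linspace(0.0, 2000.0, 201)                   # rainfall grid (mm), assumed
cdfs = np.array([1.0 - np.exp(-x / s) for s in (600.0, 800.0, 1000.0)])

lower = cdfs.min(axis=0)                            # envelope containing every
upper = cdfs.max(axis=0)                            # available GCM's CDF
mean_cdf = np.average(cdfs, axis=0, weights=[0.2, 0.5, 0.3])
# The interval [lower, upper] retains the inter-GCM spread that the single
# weighted-mean CDF discards.
```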
Abstract:
Downscaling to station-scale hydrologic variables from large-scale atmospheric variables simulated by general circulation models (GCMs) is usually necessary to assess the hydrologic impact of climate change. This work presents CRF-downscaling, a new probabilistic downscaling method that represents the daily precipitation sequence as a conditional random field (CRF). The conditional distribution of the precipitation sequence at a site, given the daily atmospheric (large-scale) variable sequence, is modeled as a linear chain CRF. CRFs do not make assumptions on independence of observations, which gives them flexibility in using high-dimensional feature vectors. Maximum likelihood parameter estimation for the model is performed using limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization. Maximum a posteriori estimation is used to determine the most likely precipitation sequence for a given set of atmospheric input variables using the Viterbi algorithm. Direct classification of dry/wet days as well as precipitation amount is achieved within a single modeling framework. The model is used to project the future cumulative distribution function of precipitation. Uncertainty in precipitation prediction is addressed through a modified Viterbi algorithm that predicts the n most likely sequences. The model is applied for downscaling monsoon (June-September) daily precipitation at eight sites in the Mahanadi basin in Orissa, India, using the MIROC3.2 medium-resolution GCM. The predicted distributions at all sites show an increase in the number of wet days, and also an increase in wet day precipitation amounts. A comparison of current and future predicted probability density functions for daily precipitation shows a change in shape of the density function with decreasing probability of lower precipitation and increasing probability of higher precipitation.
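The decoding step named above is standard; a minimal Viterbi implementation for a linear-chain model over dry/wet states, with illustrative (not fitted) log-scores:

```python
import numpy as np

# Minimal Viterbi decoder for a linear-chain model. The scores below are
# illustrative log-potentials, not the fitted CRF's parameters.

def viterbi(unary, transition):
    """unary: (T, S) log-scores; transition: (S, S) log-scores. Best state path."""
    T, S = unary.shape
    score = unary[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transition + unary[t][None, :]   # (S, S) scores
        back[t] = cand.argmax(axis=0)                # best predecessor per state
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):                    # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Two states (dry = 0, wet = 1) over five days, with hypothetical scores
unary = np.log(np.array([[.8, .2], [.6, .4], [.3, .7], [.2, .8], [.5, .5]]))
transition = np.log(np.array([[.7, .3], [.4, .6]]))
print(viterbi(unary, transition))
```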
Abstract:
Using the concept of energy-dependent effective field intensity, electron transport coefficients in nitrogen have been determined in E x B fields (E = electric field intensity, B = magnetic flux density) by numerical solution of the Boltzmann transport equation for the energy distribution of electrons. It has been observed that as the value of B/p (p = gas pressure) is increased from zero, the perpendicular drift velocity first increases linearly, reaches a maximum value, and then decreases with increasing B/p. In general, the electron mean energy is found to be a function of Eavet/p only (Eavet = averaged effective electric field intensity), but the other transport coefficients, such as the transverse drift velocity, perpendicular drift velocity, and the Townsend ionization coefficient, are functions of both E/p and B/p.
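For orientation, the textbook form of the effective-field concept (a standard result quoted here for context, not from this paper; Eavet is an energy-averaged version of this quantity) is

$$ E_{\mathrm{eff}} = \frac{E\,\nu}{\sqrt{\nu^{2} + \omega^{2}}}, \qquad \omega = \frac{eB}{m}, $$

where $\nu$ is the electron collision frequency and $\omega$ the cyclotron frequency. Since $\omega \propto B$ and $\nu \propto p$, the ratio $E_{\mathrm{eff}}/p$ is controlled jointly by $E/p$ and $B/p$, consistent with the dependences reported above.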
Abstract:
We derive a very general expression for the survival probability and the first passage time distribution for a particle executing Brownian motion in full phase space with an absorbing boundary condition at a point in position space, valid irrespective of the statistical nature of the dynamics. The expression, together with Jensen's inequality, naturally leads to a lower bound on the actual survival probability and an approximate first passage time distribution. These are expressed in terms of the position-position, velocity-velocity, and position-velocity variances. Knowledge of these variances enables one to compute a lower bound on the survival probability and consequently the first passage distribution function. As examples, we compute these for a Gaussian Markovian process and, for non-Markovian processes, with an exponentially decaying friction kernel and also with a power-law friction kernel. Our analysis shows that the survival probability decays exponentially at long times, irrespective of the nature of the dynamics, with an exponent equal to the transition state rate constant.
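Schematically, the Jensen step works as follows (a generic statement of the mechanism, not the paper's full expression): when the survival probability is an average of an exponential functional over realizations of the dynamics, convexity of the exponential gives

$$ S(t) = \left\langle e^{-\Phi(t)} \right\rangle \;\geq\; e^{-\langle \Phi(t) \rangle}, \qquad f(t) = -\frac{dS}{dt}, $$

so an average $\langle \Phi(t) \rangle$ built from the position and velocity variances yields a computable lower bound on $S(t)$ and, through $f(t)$, an approximate first passage time distribution.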
Abstract:
We set up Wigner distributions for N-state quantum systems following a Dirac-inspired approach. In contrast to much of the earlier work on this subject, which requires a 2N x 2N phase space, particularly when N is even, our approach is uniformly based on an N x N phase-space grid and thereby avoids the necessity of having to invoke a 'quadrupled' phase space and the attendant redundancy. Both the N odd and N even cases are analysed in detail, and it is found that there are striking differences between the two. While the N odd case permits full implementation of the marginal property, the even case does so only in a restricted sense. As a consequence, in the even case one is led to several equally good definitions of the Wigner distribution, as opposed to the odd case where the choice turns out to be unique.
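The marginal property referred to above has the standard discrete form (notation assumed here): summing the Wigner function over one phase-space label on the $N \times N$ grid reproduces the probabilities in the conjugate basis,

$$ \sum_{p=0}^{N-1} W(q,p) = \langle q|\hat{\rho}|q\rangle, \qquad \sum_{q=0}^{N-1} W(q,p) = \langle p|\hat{\rho}|p\rangle . $$

Per the abstract, for odd $N$ both relations can be implemented fully and the choice of $W$ is unique; in the even case they hold only in a restricted sense.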
Abstract:
Experiments are carried out with air as the test gas to obtain the surface convective heating rate on a missile-shaped body flying at hypersonic speeds. The effect of fins on the surface heating rates of the missile frustum is also investigated. The tests are performed in a hypersonic shock tunnel at a stagnation enthalpy of 2 MJ/kg and zero degree angle of attack. The experiments are conducted at flow Mach numbers of 5.75 and 8 with an effective test time of 1 ms. The measured stagnation-point heat-transfer data compare well with the theoretical value estimated using the Fay and Riddell expression. The measured heat-transfer rate with the fin configuration is slightly higher than that of the model without fins. The normalized values of the experimentally measured heat-transfer rate and Stanton number compare well with the numerically estimated results.
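For context, a common normalization of the measured heating rate is the Stanton number (this definition is an assumption; the paper may use a slightly different reference state):

$$ St = \frac{\dot{q}_w}{\rho_\infty u_\infty \,(h_0 - h_w)}, $$

where $\dot{q}_w$ is the measured wall heat flux, $\rho_\infty$ and $u_\infty$ are the freestream density and velocity, $h_0$ is the stagnation enthalpy, and $h_w$ is the wall enthalpy.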
Abstract:
We consider the problem of detecting statistically significant sequential patterns in multineuronal spike trains. These patterns are characterized by ordered sequences of spikes from different neurons with specific delays between spikes. We have previously proposed a data-mining scheme to efficiently discover such patterns that occur often enough in the data. Here we propose a method to determine the statistical significance of such repeating patterns. The novelty of our approach is that we use a compound null hypothesis that includes not only models of independent neurons but also models where neurons have weak dependencies. The strength of interaction among the neurons is represented in terms of certain pairwise conditional probabilities. We specify our null hypothesis by putting an upper bound on all such conditional probabilities. We construct a probabilistic model that captures the counting process and use this to derive a test of significance for rejecting such a compound null hypothesis. The structure of our null hypothesis also allows us to rank-order different significant patterns. We illustrate the effectiveness of our approach using spike trains generated with a simulator.
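A hypothetical illustration of the significance logic (the paper's exact test statistic is not given in the abstract): if the upper bound on the conditional probabilities implies that each of N opportunities produces the pattern with probability at most p0, the observed count is dominated by a binomial, and a small upper-tail probability at the observed count rejects the compound null.

```python
from math import comb

# Generic sketch, not the paper's test: under the bounded null, the pattern
# count C is stochastically dominated by Binomial(N, p0), so an improbably
# large observed count c rejects the compound null hypothesis.

def binom_upper_tail(N, p0, c):
    """P(C >= c) for C ~ Binomial(N, p0)."""
    return sum(comb(N, k) * p0**k * (1 - p0)**(N - k) for k in range(c, N + 1))

print(binom_upper_tail(N=1000, p0=0.01, c=25))   # tiny tail: declare significant
```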
Abstract:
Gene expression noise results in protein number distributions ranging from long-tailed to Gaussian. We show how long-tailed distributions arise from a stochastic model of the constituent chemical reactions and suggest that, in conjunction with cooperative switches, they lead to more sensitive selection of a subpopulation of cells with high protein number than is possible with Gaussian distributions. Single-cell-tracking experiments are presented to validate some of the assumptions of the stochastic simulations. We also examine the effect of DNA looping on the shape of protein distributions. We further show that when switches are incorporated in the regulation of a gene via a feedback loop, the distributions can become bimodal. This might explain the bimodal distribution of certain morphogens during early embryogenesis.
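A minimal stochastic sketch of how long tails arise (not the paper's full reaction network, and all parameter values are assumptions): proteins produced in geometrically distributed bursts and degraded molecule by molecule, simulated with the standard Gillespie algorithm, give long-tailed, negative-binomial-like copy-number distributions.

```python
import numpy as np

# Gillespie simulation of a bursty birth-death model for protein number n:
# bursts of mean size b arrive at rate k_b; each molecule degrades at rate g.

rng = np.random.default_rng(0)

def sample_protein_number(k_b=1.0, g=0.1, b=10.0, t_end=500.0):
    n, t = 0, 0.0
    while t < t_end:
        total_rate = k_b + g * n
        t += rng.exponential(1.0 / total_rate)        # Gillespie waiting time
        if rng.random() < k_b / total_rate:
            n += rng.geometric(1.0 / b)               # burst event, mean size b
        else:
            n -= 1                                    # one molecule degrades
    return n

samples = [sample_protein_number() for _ in range(200)]
print(np.mean(samples))   # near k_b * b / g = 100, with a long right tail
```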
Abstract:
The probability that a random process crosses an arbitrary level for the first time is expressed as a Gram-Charlier series, the leading term of which is the Poisson approximation. The coefficients of this series are related to the moments of the number of level crossings. The results are applicable to both stationary and non-stationary processes. Some numerical results are presented for the response process of a linear single-degree-of-freedom oscillator under Gaussian white noise excitation.
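The leading term mentioned above has a classical form, quoted here for orientation: the Poisson approximation writes the probability of no upcrossing of level $\alpha$ in $[0,T]$ in terms of the mean upcrossing rate $\nu_\alpha(t)$ given by Rice's formula,

$$ P\{\text{no upcrossing in } [0,T]\} \approx \exp\!\left(-\int_0^T \nu_\alpha(t)\,dt\right), \qquad \nu_\alpha(t) = \int_0^\infty \dot{x}\, p_{X\dot{X}}(\alpha, \dot{x}; t)\,d\dot{x}, $$

where $p_{X\dot{X}}$ is the joint density of the process and its derivative; the Gram-Charlier series corrects this leading term using higher moments of the number of level crossings.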
Abstract:
We report numerical and analytic results for the spatial survival probability for fluctuating one-dimensional interfaces with Edwards-Wilkinson or Kardar-Parisi-Zhang dynamics in the steady state. Our numerical results are obtained from analysis of steady-state profiles generated by integrating a spatially discretized form of the Edwards-Wilkinson equation to long times. We show that the survival probability exhibits scaling behavior in its dependence on the system size and the "sampling interval" used in the measurement for both "steady-state" and "finite" initial conditions. Analytic results for the scaling functions are obtained from a path-integral treatment of a formulation of the problem in terms of one-dimensional Brownian motion. A "deterministic approximation" is used to obtain closed-form expressions for survival probabilities from the formally exact analytic treatment. The resulting approximate analytic results provide a fairly good description of the numerical data.
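A minimal sketch of the numerical ingredient described above (placeholder parameters, not the paper's setup): Euler-Maruyama integration of the spatially discretized Edwards-Wilkinson equation with periodic boundaries, run to long times to sample steady-state profiles.

```python
import numpy as np

# Euler-Maruyama integration of the discretized Edwards-Wilkinson equation
# dh/dt = nu * laplacian(h) + noise, on a ring of L sites. The stability
# condition nu * dt <= 0.5 (lattice spacing 1) is satisfied by these values.

rng = np.random.default_rng(1)
L, nu, dt, steps = 32, 1.0, 0.01, 200_000

h = np.zeros(L)
for _ in range(steps):
    lap = np.roll(h, 1) + np.roll(h, -1) - 2.0 * h        # discrete Laplacian
    h += nu * lap * dt + np.sqrt(2.0 * dt) * rng.standard_normal(L)

h -= h.mean()
print(h.std())          # interface width, saturated in the steady state
```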