833 results for PROBABILITY REPRESENTATION


Relevance:

60.00%

Publisher:

Abstract:

Capacity probability models of generating units are commonly used in many power system reliability studies at hierarchical level one (HLI). Analytical modelling of a generating system with many units, or with generating units that have many derated states, can result in an extensive number of states in the capacity model. Limitations on the available memory and computational time of present computer facilities can make the assessment of such systems difficult in many studies. A clustering procedure using the nearest centroid sorting method was used for the IEEE-RTS load model; the application proved very effective in producing a highly similar model with substantially fewer states. This paper presents an extended application of the clustering method to include the capacity probability representation. A series of sensitivity studies is illustrated using the IEEE-RTS generating system and load models. The loss of load expectation (LOLE) and loss of energy expectation (LOEE) are used as indicators to evaluate the application.
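
A minimal sketch of the two steps described above, with purely illustrative numbers (the real IEEE-RTS capacity table is not reproduced here): a capacity outage probability table is clustered into fewer states by nearest-centroid sorting, and LOLE is computed against an hourly load model before and after clustering. All function and variable names are assumptions made for illustration.

```python
import numpy as np

# Hypothetical capacity outage probability table: available capacity (MW) and probability.
# In practice this would come from convolving the units of the generating system.
capacity = np.array([3405, 3305, 3205, 3105, 3005, 2905, 2805])  # MW (illustrative)
prob     = np.array([0.40, 0.25, 0.15, 0.10, 0.05, 0.03, 0.02])

def cluster_capacity_states(cap, p, n_clusters=3, n_iter=50):
    """Nearest-centroid clustering of capacity states, weighted by state probability."""
    centroids = np.linspace(cap.min(), cap.max(), n_clusters)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(cap[:, None] - centroids[None, :]), axis=1)
        for k in range(n_clusters):
            mask = labels == k
            if mask.any():
                centroids[k] = np.average(cap[mask], weights=p[mask])
    cluster_prob = np.array([p[labels == k].sum() for k in range(n_clusters)])
    return centroids, cluster_prob

def lole(cap, p, hourly_load):
    """Loss of load expectation (hours/period): sum over hours of P(capacity < load)."""
    return sum(p[cap < load].sum() for load in hourly_load)

centroids, cprob = cluster_capacity_states(capacity, prob)
hourly_load = np.random.default_rng(0).uniform(2500, 3300, size=8760)  # illustrative load model
print("LOLE (full table):     ", lole(capacity, prob, hourly_load))
print("LOLE (clustered table):", lole(centroids, cprob, hourly_load))
```

Comparing the two LOLE values gives a simple indication of how much accuracy is traded for the reduction in the number of capacity states.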

Relevance:

60.00%

Publisher:

Abstract:

We present the results of an operational use of experimentally measured optical tomograms to determine state characteristics (purity) while avoiding any reconstruction of quasiprobabilities. We also develop a natural way to estimate the errors (both statistical and systematic) from an analysis of the experimental data themselves. The precision of the experiment can be increased by post-selecting the data with minimal (systematic) errors. We demonstrate these techniques on coherent and photon-added coherent states measured via time-domain improved homodyne detection. The operational use and precision of the data allowed us to check purity-dependent uncertainty relations and uncertainty relations for Shannon and Rényi entropies.
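
For orientation, the quantities being checked can be written in standard textbook form (these are not the paper's specific estimators): the purity of a state and the Shannon-entropy uncertainty relation for the quadrature distributions. The Rényi-entropy and purity-dependent refinements used in the paper are not reproduced here.

```latex
% Purity of a state \rho, and the Shannon-entropy uncertainty relation for
% conjugate quadratures q and p (Bialynicki-Birula--Mycielski bound);
% w(x) denotes the measured quadrature probability distribution.
\mu = \operatorname{Tr}\rho^{2},
\qquad
S_q + S_p \;\ge\; \ln(\pi e \hbar),
\qquad
S_x = -\int w(x)\,\ln w(x)\,\mathrm{d}x .
```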

Relevance:

40.00%

Publisher:

Abstract:

In this paper I investigate the conditions under which a convex capacity (or a non-additive probability which exhibits uncertainty aversion) can be represented as a squeeze of an (additive) probability measure associated with an uncertainty aversion function. I then present two alternative formulations of the Choquet integral (and extend these formulations to the Choquet expected utility) in a parametric approach that makes comparative statics exercises over the uncertainty aversion function straightforward.
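
For reference, the standard textbook form of the Choquet integral of an act f with respect to a capacity ν, and the corresponding Choquet expected utility; the paper's parametric formulations are alternatives to this baseline.

```latex
% Choquet integral of a measurable function f on a space X with respect to a capacity \nu,
% and the Choquet expected utility of an act f under a utility function u.
\int f \,\mathrm{d}\nu
  \;=\; \int_{0}^{\infty} \nu\bigl(\{f \ge t\}\bigr)\,\mathrm{d}t
  \;+\; \int_{-\infty}^{0} \bigl[\nu\bigl(\{f \ge t\}\bigr) - \nu(X)\bigr]\,\mathrm{d}t,
\qquad
\mathrm{CEU}(f) \;=\; \int u(f)\,\mathrm{d}\nu .
```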

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a robust stochastic framework for the incorporation of visual observations into conventional estimation, data fusion, navigation and control algorithms. The representation combines Isomap, a non-linear dimensionality reduction algorithm, with expectation-maximization, a statistical learning scheme. The joint probability distribution of this representation is computed offline based on existing training data. The training phase of the algorithm results in a non-linear, non-Gaussian likelihood model of natural features conditioned on the underlying visual states. This generative model can be used online to instantiate likelihoods corresponding to observed visual features in real time. The instantiated likelihoods are expressed as a Gaussian mixture model and are conveniently integrated within existing non-linear filtering algorithms. Example applications based on real visual data from heterogeneous, unstructured environments demonstrate the versatility of the generative models.
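
A minimal sketch of the offline/online pipeline described above, assuming scikit-learn as the implementation (the paper does not specify a library; all names and parameter values here are illustrative): Isomap embeds the training features, a Gaussian mixture fitted by expectation-maximization models their distribution, and new observations are scored online.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Offline phase: hypothetical high-dimensional visual feature vectors from training data.
train_features = rng.normal(size=(500, 64))

# Non-linear dimensionality reduction of the visual features.
isomap = Isomap(n_components=3, n_neighbors=10)
train_embedded = isomap.fit_transform(train_features)

# Statistical learning step: a Gaussian mixture model (fitted by EM) over the
# embedded features serves as the non-Gaussian likelihood model.
gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
gmm.fit(train_embedded)

# Online phase: instantiate likelihoods for newly observed features.
new_features = rng.normal(size=(10, 64))
new_embedded = isomap.transform(new_features)
log_likelihoods = gmm.score_samples(new_embedded)  # per-observation log-likelihood
print(log_likelihoods)
```

Because the instantiated likelihood is itself a Gaussian mixture, it can be folded into standard non-linear filters (e.g. as a mixture measurement update) without further approximation of its form.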

Relevance:

30.00%

Publisher:

Abstract:

Background: In order to provide insights into the complex biochemical processes inside a cell, modelling approaches must find a balance between achieving an adequate representation of the physical phenomena and keeping the associated computational cost within reasonable limits. This issue is particularly pressing when spatial inhomogeneities have a significant effect on the system's behaviour. In such cases, a spatially resolved stochastic method can better portray the biological reality, but the corresponding computer simulations can in turn be prohibitively expensive. Results: We present a method that incorporates spatial information by means of tailored, probability-distributed time delays. These distributions can be obtained directly from a single in silico experiment or a suitable set of in vitro experiments, and are subsequently fed into a delay stochastic simulation algorithm (DSSA), achieving a good compromise between computational cost and a much more accurate representation of spatial processes such as molecular diffusion and translocation between cell compartments. Additionally, we present a novel alternative approach based on delay differential equations (DDEs) that can be used in scenarios of high molecular concentrations and low noise propagation. Conclusions: Our proposed methodologies accurately capture and incorporate certain spatial processes into temporal stochastic and deterministic simulations, increasing their accuracy at low computational cost. This is of particular importance given that the time spans of cellular processes are generally larger (possibly by several orders of magnitude) than those achievable by current spatially resolved stochastic simulators. Hence, our methodology allows users to explore cellular scenarios under the effects of diffusion and stochasticity over time spans that were, until now, simply unfeasible. Our methodologies are supported by theoretical considerations on the different modelling regimes, i.e. spatial vs. delay-temporal, as indicated by the corresponding master equations and presented elsewhere.
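
A minimal sketch of the delay-based idea under strong simplifying assumptions (one species, one delayed production channel, and an arbitrary gamma delay distribution standing in for the tailored distributions described above): a Gillespie-style simulation in which produced molecules only become active after a random delay, mimicking diffusion or translocation between compartments.

```python
import heapq
import numpy as np

rng = np.random.default_rng(1)

# Illustrative rate constants and horizon; the delay distribution is an assumption.
k_prod, k_deg, t_end = 5.0, 0.1, 100.0
delay_sampler = lambda: rng.gamma(shape=4.0, scale=2.0)  # stand-in for a measured delay law

t, active = 0.0, 0
pending = []  # min-heap of times at which delayed molecules become active

while t < t_end:
    total_rate = k_prod + k_deg * active
    t_next_reaction = t + rng.exponential(1.0 / total_rate)
    # Release any delayed molecules scheduled before the next reaction would fire.
    if pending and pending[0] <= t_next_reaction:
        t = heapq.heappop(pending)
        active += 1
        continue
    t = t_next_reaction
    if rng.uniform() < k_prod / total_rate:
        heapq.heappush(pending, t + delay_sampler())  # production, completed after a delay
    else:
        active -= 1  # degradation of an active molecule

print("active molecules at t_end:", active)
```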

Relevance:

30.00%

Publisher:

Abstract:

Sampling design is critical to the quality of quantitative research, yet it does not always receive appropriate attention in nursing research. The current article details how balancing probability techniques with practical considerations produced a representative sample of Australian nursing homes (NHs). Budgetary, logistical, and statistical constraints were managed by excluding some NHs (e.g., those too difficult to access) from the sampling frame; a stratified random sampling methodology yielded a final sample of 53 NHs from a population of 2,774. In testing the adequacy of representation of the study population, chi-square tests for goodness of fit generated nonsignificant results for distribution by distance from a major city and by type of organization. A significant result for state/territory was expected and was easily corrected by the application of weights. The current article provides recommendations for drawing high-quality, probability-based samples and stresses the importance of testing the representativeness of achieved samples.
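
A minimal sketch, with entirely hypothetical stratum counts, of the two ingredients described above: proportional stratified random sampling from a frame, followed by a chi-square goodness-of-fit check of the achieved sample against the population distribution.

```python
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(2)

# Hypothetical sampling frame of nursing homes, stratified by state/territory.
# Counts are illustrative only (the real frame in the study contained 2,774 homes).
frame_counts = {"NSW": 900, "VIC": 750, "QLD": 520, "WA": 300, "SA": 220, "TAS": 84}
n_sample = 53

# Proportional stratified random sampling: allocate the sample across strata,
# then draw NHs at random (without replacement) within each stratum.
total = sum(frame_counts.values())
allocation = {s: max(1, round(n_sample * c / total)) for s, c in frame_counts.items()}
sample = {s: rng.choice(c, size=min(n, c), replace=False)
          for (s, c), n in zip(frame_counts.items(), allocation.values())}

# Chi-square goodness of fit: does the achieved sample mirror the population
# distribution across strata? A nonsignificant p-value suggests adequate representation.
observed = np.array([len(v) for v in sample.values()], dtype=float)
expected = np.array([c / total for c in frame_counts.values()]) * observed.sum()
stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")
```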

Relevance:

30.00%

Publisher:

Abstract:

Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or base the inference only on the likelihood function, may be a fundamental question in theory, but in practice it may well be of less importance if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where parametric assumptions in standard statistical models would be too rigid, a more flexible model formulation, combined with fully probabilistic inference, can be achieved using hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to various problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two of them hierarchical Bayesian modeling. Because maximum likelihood may be presented as a special case of Bayesian inference, but not the other way round, in the introductory part of this work we present a framework for probability-based inference using only Bayesian concepts. We also re-derive some results presented in the original articles using the toolbox developed herein, to show that they are also justifiable under this more general framework. Here the assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that this same reasoning can also be applied under sampling from a finite population. The main emphasis is on probability-based inference under incomplete observation due to study design. This is illustrated using a generic two-phase cohort sampling design as an example. The alternative approaches presented for analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set, conditioning on the rule that generated that set. Conditional likelihood inference is also applied to a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is also feasible in this case.
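
The exchangeability argument mentioned above rests on de Finetti's representation theorem, quoted here in its standard form (not a result specific to this work): an infinite exchangeable sequence admits a mixture representation in which the observations are conditionally independent and identically distributed given a parameter with a prior (mixing) distribution.

```latex
% de Finetti's representation theorem for an infinite exchangeable sequence x_1, x_2, ...:
% the joint distribution is a mixture of conditionally i.i.d. models p(x | \theta)
% over a mixing (prior) distribution P on the parameter space \Theta.
p(x_1,\dots,x_n) \;=\; \int_{\Theta} \prod_{i=1}^{n} p(x_i \mid \theta)\, \mathrm{d}P(\theta).
```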

Relevance:

30.00%

Publisher:

Abstract:

In this work we revisit the problem of hedging a contingent claim using a mean-square criterion. We prove that in an incomplete market a probability measure can be identified under which the relevant value process becomes a martingale. This is in fact a new proposition on the martingale representation theorem. The new results also identify a weight function that serves as an approximation to the Radon–Nikodým derivative of the unique neutral martingale measure.
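
For orientation only, a standard decomposition from the mean-square hedging literature (not necessarily the exact proposition proved in the paper): the claim H splits into an initial value, a hedging integral against the price process S, and a residual L_T orthogonal to its martingale part; the mean-square criterion minimises the residual risk.

```latex
% Galtchouk--Kunita--Watanabe-type decomposition used in mean-square hedging:
% H is the contingent claim, S the (discounted) price process, \xi the hedging
% strategy, and L_T a residual term orthogonal to the martingale part of S.
H \;=\; H_0 + \int_{0}^{T} \xi_t \,\mathrm{d}S_t + L_T ,
\qquad
\min_{\xi}\; \mathbb{E}\Bigl[\bigl(H - H_0 - \textstyle\int_{0}^{T}\xi_t\,\mathrm{d}S_t\bigr)^{2}\Bigr].
```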

Relevance:

30.00%

Publisher:

Abstract:

Large data sets of radiocarbon dates are becoming a more common feature of archaeological research. The sheer number of radiocarbon dates produced, however, raises issues of representation and interpretation. This paper presents a methodology that both reduces the visible impact of dating fluctuations and takes into consideration the influence of the underlying radiocarbon calibration curve. By doing so, it may be possible to distinguish between periods of human activity in early medieval Ireland and the statistical tails produced by radiocarbon calibration.
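
A minimal sketch of the aggregation step such methodologies build on, using illustrative Gaussian date densities (a real analysis would first calibrate each determination against a calibration curve such as IntCal, which is exactly the influence the paper accounts for): the normalised densities of all dates are summed over a calendar axis.

```python
import numpy as np

rng = np.random.default_rng(3)

# Calendar axis (years AD) and a set of hypothetical calibrated-date summaries.
# Each date is reduced here to a mean and standard deviation for illustration.
years = np.arange(400, 1200)
date_means = rng.uniform(500, 1100, size=200)
date_sds   = rng.uniform(20, 60, size=200)

def summed_probability(years, means, sds):
    """Sum the normalised probability density of every date over the calendar axis."""
    spd = np.zeros_like(years, dtype=float)
    for m, s in zip(means, sds):
        density = np.exp(-0.5 * ((years - m) / s) ** 2)
        spd += density / density.sum()  # each date contributes unit probability mass
    return spd

spd = summed_probability(years, date_means, date_sds)
peak_year = years[np.argmax(spd)]
print("calendar year with the highest summed probability:", peak_year)
```

Peaks in such a curve reflect both genuine concentrations of activity and artefacts of the calibration curve, which is why the paper's methodology treats the two explicitly.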

Relevance:

30.00%

Publisher:

Abstract:

The present work is an attempt to explain particle production in the early universe. We argue that nonzero values of the stress-energy tensor evaluated in a squeezed vacuum state can be due to particle production, and this supports the concept of particle production from zero-point quantum fluctuations. In the present calculation we use the squeezed coherent state introduced by Fan and Xiao [7]. The vacuum expectation values of the stress-energy tensor, defined prior to any dynamics in the background gravitational field, give all information about particle production. Squeezing of the vacuum is achieved by means of the background gravitational field, which plays the role of a parametric amplifier [8]. The present calculation shows that the vacuum expectation values of the energy density and pressure contain terms in addition to the classical zero-point energy terms. The calculation of the particle production probability shows that the probability increases as the squeezing parameter increases, reaches a maximum value, and then decreases.
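
As background, the textbook relations for a single-mode squeezed vacuum with squeezing parameter r (not the Fan–Xiao squeezed coherent state used in the paper) show how squeezing populates the mode and adds an excitation term on top of the zero-point energy:

```latex
% Mean photon number and mean energy of a single-mode squeezed vacuum
% with squeezing parameter r; the 1/2 term is the zero-point contribution.
\langle \hat{n} \rangle \;=\; \sinh^{2} r,
\qquad
\langle \hat{H} \rangle \;=\; \hbar\omega\Bigl(\sinh^{2} r + \tfrac{1}{2}\Bigr).
```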

Relevance:

30.00%

Publisher:

Abstract:

This article shows how one can formulate the representation problem starting from Bayes' theorem. The purpose of this article is to raise awareness of the formal solutions, so that approximations can be placed in a proper context. The representation errors appear in the likelihood, and the different possibilities for the representation of reality in model and observations are discussed, including nonlinear representation probability density functions. Specifically, the assumptions needed in the usual procedure of adding a representation error covariance to the error covariance of the observations are discussed, and it is shown that, when several sub-grid observations are present, their mean still has a representation error; so-called 'superobbing' does not resolve the issue. A connection is made to the off-line or on-line retrieval problem, providing a new, simple proof of the equivalence of assimilating linear retrievals and original observations. Furthermore, it is shown how nonlinear retrievals can be assimilated without loss of information. Finally, we discuss how errors in the observation-operator model can be treated consistently in the Bayesian framework, connecting to previous work in this area.
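
The "usual procedure" referred to above can be summarised with standard data-assimilation notation (the symbols here are assumptions, not the paper's own): the posterior follows Bayes' theorem, and under Gaussian assumptions the representation error covariance is simply added to the instrument error covariance in the likelihood.

```latex
% Bayes' theorem for the model state x given observations y, with observation
% operator H; under Gaussian assumptions the representation error covariance
% R_repr is added to the instrument error covariance R_instr in the likelihood.
p(x \mid y) \;\propto\; p(y \mid x)\, p(x),
\qquad
p(y \mid x) \;=\; \mathcal{N}\!\bigl(y;\, H(x),\; R_{\mathrm{instr}} + R_{\mathrm{repr}}\bigr).
```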