886 results for Discrete Gaussian Sampling


Abstract:

We discuss the expectation propagation (EP) algorithm for approximate Bayesian inference using a factorizing posterior approximation. For neural network models, we use a central limit theorem argument to make EP tractable when the number of parameters is large. For two types of models, we show that EP can achieve optimal generalization performance when data are drawn from a simple distribution.
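
The abstract gives no implementation details, so the sketch below is only a minimal, self-contained illustration of EP with a factorizing Gaussian approximation: it runs EP on Minka's classic "clutter problem" (a Gaussian signal observed amid Gaussian clutter), the standard toy model for the algorithm. All data and parameter values are invented for the example.

```python
import numpy as np

# Minimal EP for Minka's "clutter problem": each observation comes from
#   (1 - w) * N(theta, 1) + w * N(0, a),
# and the posterior over theta is approximated by a Gaussian built from
# one Gaussian "site" factor per data point.
rng = np.random.default_rng(0)
w, a = 0.5, 10.0                       # clutter weight and clutter variance
n = 50
signal = rng.normal(2.0, 1.0, n)       # true theta = 2.0
clutter = rng.normal(0.0, np.sqrt(a), n)
x = np.where(rng.random(n) < w, clutter, signal)

prior_m, prior_v = 0.0, 100.0
tau = np.zeros(n)                      # site precisions
nu = np.zeros(n)                       # site precision-times-mean

def norm_pdf(y, m, v):
    return np.exp(-0.5 * (y - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

for sweep in range(20):
    for i in range(n):
        # Current global approximation in natural parameters.
        post_prec = 1.0 / prior_v + tau.sum()
        post_mean = (prior_m / prior_v + nu.sum()) / post_prec
        # Cavity distribution: remove site i.
        cav_prec = post_prec - tau[i]
        if cav_prec <= 0:
            continue                   # skip updates that break the cavity
        cav_v = 1.0 / cav_prec
        cav_m = (post_prec * post_mean - nu[i]) * cav_v
        # Moments of the tilted distribution (Minka 2001).
        z_sig = (1 - w) * norm_pdf(x[i], cav_m, cav_v + 1.0)
        r = z_sig / (z_sig + w * norm_pdf(x[i], 0.0, a))
        new_m = cav_m + r * cav_v * (x[i] - cav_m) / (cav_v + 1.0)
        new_v = (cav_v - r * cav_v**2 / (cav_v + 1.0)
                 + r * (1 - r) * (cav_v * (x[i] - cav_m) / (cav_v + 1.0)) ** 2)
        # Refit site i so that cavity * site matches the new moments.
        tau[i] = 1.0 / new_v - cav_prec
        nu[i] = new_m / new_v - cav_m * cav_prec

post_prec = 1.0 / prior_v + tau.sum()
post_mean = (prior_m / prior_v + nu.sum()) / post_prec
print("EP posterior: mean %.3f, sd %.3f" % (post_mean, post_prec ** -0.5))
```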

Abstract:

Morphological and physiological caste differences were compared in colonies of Dolichovespula maculata in the middle and late phases of the colony cycle. The females showed three patterns of ovarian development, and only females classified as queens were inseminated. In both phases, queens were larger than workers on most measures. Discriminant analyses showed high distinction between castes in both phases. We also found pronounced qualitative differences: workers had hairs covering the entire body whereas queens had none, and the castes also differed in gaster colouration. These results indicate that D. maculata presents pre-imaginal caste differentiation, as seen in other Vespinae, and that size varies from colony to colony, such that queens of one colony may be comparable in size to workers of another, although the castes are always distinguishable within a colony.
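
As a loose illustration of the discriminant analysis step (the measurements, sample sizes, and numbers below are invented, not the study's data), a linear discriminant can be fitted to two hypothetical morphometric variables:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical morphometric data (mm): two measurements per female,
# with queens larger on average, echoing the study's size results.
rng = np.random.default_rng(1)
queens = rng.multivariate_normal([5.8, 16.0], [[0.04, 0.02], [0.02, 0.09]], 40)
workers = rng.multivariate_normal([5.2, 14.5], [[0.04, 0.02], [0.02, 0.09]], 120)
X = np.vstack([queens, workers])
y = np.array([1] * len(queens) + [0] * len(workers))   # 1 = queen

lda = LinearDiscriminantAnalysis().fit(X, y)
print("resubstitution accuracy:", round(lda.score(X, y), 3))
print("discriminant weights:", lda.coef_[0])
```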

Abstract:

The dynamic response of dry masonry columns can be approximated with finite-difference equations. Continuum models follow by replacing the difference quotients of the discrete model with corresponding differential expressions. The mathematically simplest of these models is a one-dimensional Cosserat theory. Within the presented homogenization context, the Cosserat theory is obtained by making ad hoc assumptions about the relative importance of certain terms in the differential expansions. The quality of approximation of the various theories is tested by comparing their dispersion relations for bending waves with the dispersion relation of the discrete theory. All theories coincide, with differences of less than 1%, for wavelength-to-block-height ratios (L/h) greater than 2π. The theory based on systematic differential approximation remains accurate down to L/h = 3 and then diverges rapidly. The Cosserat model becomes increasingly inaccurate for L/h < 2π; however, in contrast to the systematic approximation, its wave speed remains finite. In conclusion, considering its relative simplicity, the Cosserat model appears to be the natural starting point for the development of continuum models for blocky structures.
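
The comparison methodology can be illustrated with a generic one-dimensional lattice analogue (not the paper's masonry model, whose bending-wave relations and error levels differ): compare the exact dispersion relation of a discrete mass-spring chain against its long-wavelength continuum limit and track the error as L/h shrinks.

```python
import numpy as np

# Discrete chain: omega_d(q) = 2*sqrt(k/m)*|sin(q*h/2)|.
# Continuum limit: omega_c(q) = c*q with c = h*sqrt(k/m).
k_over_m, h = 1.0, 1.0
c = h * np.sqrt(k_over_m)

for L_over_h in [20.0, 2 * np.pi, 4.0, 3.0, 2.0]:
    q = 2 * np.pi / (L_over_h * h)     # wavenumber of a wave of length L
    omega_d = 2 * np.sqrt(k_over_m) * abs(np.sin(q * h / 2))
    omega_c = c * q
    print("L/h = %5.2f   continuum-model error: %5.2f%%"
          % (L_over_h, 100 * abs(omega_c - omega_d) / omega_d))
```

As in the paper's comparison, the continuum description is excellent for long waves and degrades once the wavelength approaches a few cell heights.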

Abstract:

Using the method of quantum trajectories, we show that a known pure state can be optimally monitored through time when subject to a sequence of discrete measurements. By modifying the way we extract information from the measurement apparatus, we can minimize the average algorithmic information of the measurement record without changing the unconditional evolution of the measured system. We define an optimal measurement scheme as one that attains the lowest average algorithmic information allowed. We also show how information about system operator averages can be extracted from the measurement records and their probabilities. In the limit of weak coupling, the optimal measurement scheme determines the statistics of the variance of the measured variable directly. We discuss the relevance of such measurements to recent experiments in quantum optics.
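
A toy analogue of the record-compression idea (a projective-measurement caricature, not the paper's weak-coupling scheme): when a known qubit state is measured in a basis aligned with it, the record is deterministic and carries no information, while a misaligned basis yields a random record. Shannon entropy is used as a proxy here, since for long i.i.d. records the expected algorithmic information per symbol approaches the entropy. Angles and shot counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 0.7                             # known state cos(t)|0> + sin(t)|1>
psi = np.array([np.cos(theta), np.sin(theta)])

def measurement_record(phi, shots=10000):
    """Projective measurement in the basis rotated by angle phi."""
    b0 = np.array([np.cos(phi), np.sin(phi)])
    p0 = (b0 @ psi) ** 2                # Born-rule outcome probability
    return rng.random(shots) < p0

def record_entropy(record):
    """Shannon entropy per symbol of a binary record, in bits."""
    p = record.mean()
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

for offset in [0.0, 0.3, np.pi / 4]:
    rec = measurement_record(theta + offset)
    print("basis offset %.2f rad -> %.3f bits/measurement"
          % (offset, record_entropy(rec)))
```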

Abstract:

In this paper we study the possible microscopic origin of heavy-tailed probability distributions for the price variation of financial instruments. We extend the standard log-normal process to include another random component, giving the so-called stochastic volatility models. We study these models under an assumption, akin to the Born-Oppenheimer approximation, in which the volatility has already relaxed to its equilibrium distribution and acts as a background to the evolution of the price process. In this approximation, we show that all stochastic volatility models should exhibit a scaling relation in the time lag of zero-drift modified log-returns. We verify that the Dow Jones Industrial Average index indeed follows this scaling. We then focus on two popular stochastic volatility models, the Heston and Hull-White models. In particular, we show that in the Hull-White model the resulting probability distribution of log-returns in this approximation corresponds to the Tsallis (Student's t) distribution, with the Tsallis parameters given in terms of the microscopic stochastic volatility model. Finally, we show that log-returns for 30 years of Dow Jones index data are well fitted by a Tsallis distribution, and we obtain the relevant parameters.
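
A minimal sketch of the variance-mixture mechanism behind this result, under the same "volatility relaxed to an equilibrium background" assumption: if the squared volatility is drawn from an inverse-gamma equilibrium distribution, the resulting log-returns are exactly Student-t, i.e. of the Tsallis form. The degrees of freedom and sample size below are illustrative, not fitted to Dow Jones data.

```python
import numpy as np
from scipy import stats

# Returns r = sqrt(v) * eps with v ~ InvGamma(nu/2, nu/2) and eps ~ N(0, 1)
# are exactly Student-t with nu degrees of freedom.
rng = np.random.default_rng(3)
nu, n = 5.0, 200_000
v = stats.invgamma.rvs(nu / 2, scale=nu / 2, size=n, random_state=rng)
r = np.sqrt(v) * rng.standard_normal(n)

df_fit, loc_fit, scale_fit = stats.t.fit(r)
print("fitted Student-t degrees of freedom: %.2f (true %.1f)" % (df_fit, nu))
print("sample excess kurtosis: %.2f (Gaussian would be 0)" % stats.kurtosis(r))
```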

Abstract:

The probit model is a popular device for explaining binary choice decisions in econometrics. It has been used to describe choices such as labor force participation, travel mode, home ownership, and type of education; these and many more examples can be found in papers by Amemiya (1981) and Maddala (1983). Given the contribution of economics towards explaining such choices, and given the nature of the data that are collected, prior information on the relationship between a choice probability and several explanatory variables frequently exists. Bayesian inference is a convenient vehicle for including such prior information. Given the increasing popularity of Bayesian inference, it is useful to ask whether inferences from a probit model are sensitive to the choice between Bayesian and sampling-theory techniques. Of interest is the sensitivity of inference on coefficients, probabilities, and elasticities. We consider these issues in a model designed to explain the choice between fixed and variable interest rate mortgages. Two Bayesian priors are employed: a uniform prior on the coefficients, designed to be noninformative, and an inequality-restricted prior on the signs of the coefficients. We often know a priori whether increasing the value of a particular explanatory variable will have a positive or negative effect on a choice probability; this knowledge can be captured by a prior probability density function (pdf) that is truncated to be positive or negative. Thus, three sets of results are compared: those from maximum likelihood (ML) estimation, those from Bayesian estimation with an unrestricted uniform prior on the coefficients, and those from Bayesian estimation with a uniform prior truncated to accommodate inequality restrictions on the coefficients.
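
A minimal sketch of the three-way comparison on simulated data (the covariates, sample size, and "true" coefficients are hypothetical stand-ins, not the paper's mortgage data): maximum likelihood, Bayesian estimation under a flat prior via random-walk Metropolis, and the same sampler with the prior truncated to enforce assumed coefficient signs.

```python
import numpy as np
from scipy import optimize, stats

# Hypothetical choice data: y = 1 for a variable-rate mortgage, explained
# by an intercept and two standardized covariates; slope signs (+, -) are
# assumed known a priori for the restricted prior.
rng = np.random.default_rng(4)
n = 400
X = np.column_stack([np.ones(n), rng.standard_normal(n), rng.standard_normal(n)])
beta_true = np.array([-0.3, 1.0, -0.7])
y = (rng.random(n) < stats.norm.cdf(X @ beta_true)).astype(float)

def neg_loglik(beta):
    p = np.clip(stats.norm.cdf(X @ beta), 1e-12, 1 - 1e-12)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).sum()

# 1) Maximum likelihood.
ml = optimize.minimize(neg_loglik, np.zeros(3), method="BFGS")
print("ML estimate:           ", np.round(ml.x, 3))

# 2) Random-walk Metropolis under a flat prior, optionally truncated so
#    that beta_1 > 0 and beta_2 < 0 (the inequality-restricted prior).
#    The chain starts at the ML estimate, assumed to satisfy the signs.
def metropolis(restricted, draws=20000, step=0.15):
    beta, ll, out = ml.x.copy(), -neg_loglik(ml.x), []
    for _ in range(draws):
        prop = beta + step * rng.standard_normal(3)
        if restricted and not (prop[1] > 0 and prop[2] < 0):
            out.append(beta.copy())    # prior density zero: auto-reject
            continue
        ll_prop = -neg_loglik(prop)
        if np.log(rng.random()) < ll_prop - ll:
            beta, ll = prop, ll_prop
        out.append(beta.copy())
    return np.array(out)[5000:]        # drop burn-in

print("Bayes, uniform prior:  ", np.round(metropolis(False).mean(axis=0), 3))
print("Bayes, sign-restricted:", np.round(metropolis(True).mean(axis=0), 3))
```

With a flat prior the posterior mean essentially reproduces ML; the sign restrictions matter most when the data only weakly identify a coefficient.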

Abstract:

A variable that appears to affect preference development is exposure to a variety of options. Providing opportunities for systematically sampling different options is one procedure that can facilitate the development of preference, which is indicated by the consistency of selections. The purpose of this study was to evaluate the effects of providing sampling opportunities on preference development for two adults with severe disabilities. Opportunities to sample a variety of drink items were presented, followed by choice opportunities at the site where sampling occurred and at a non-sampling site (a grocery store). Results show that the participants developed a definite response consistency in their selections at both sites. Implications for sampling practices are discussed.

Abstract:

The step size determines the accuracy of a discrete element simulation. Because the position and velocity updates use a pre-calculated table, step size control cannot rely on the usual integration-formula error estimates. A step size control scheme suitable for the table-driven velocity and position calculation instead uses the difference between the result of one big step and that of two small steps. This variable time step method automatically chooses a suitable time step for each particle at each step according to the local conditions. Simulations using a fixed time step are compared with those using the variable time step. The difference in computation time for the same accuracy depends on the particular problem; for a simple test case the times are roughly similar. However, the variable step size delivers the required accuracy on the first run, whereas a fixed step size may require several runs to check the simulation accuracy, or a conservative step size that results in longer run times.
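
The paper's table-driven integrator is not reproduced here; the sketch below shows the underlying step-doubling idea with a plain explicit integrator: advance once with step h and again with two steps of h/2, treat the difference as a local error estimate, and rescale h accordingly. The force law and tolerance are stand-ins.

```python
import numpy as np

def accel(x, v):
    return -x - 0.1 * v            # damped spring, a stand-in force law

def step(x, v, h):
    """One symplectic-Euler step (first-order explicit integrator)."""
    v = v + h * accel(x, v)
    return x + h * v, v

def adaptive_run(x, v, t_end, h=0.1, tol=1e-5):
    t, steps = 0.0, 0
    while t < t_end:
        h = min(h, t_end - t)                  # do not overshoot the end
        x1, v1 = step(x, v, h)                 # one big step
        xh, vh = step(x, v, h / 2)             # two half steps
        x2, v2 = step(xh, vh, h / 2)
        err = max(abs(x2 - x1), abs(v2 - v1))  # local error estimate
        if err < tol:                          # accept the finer result
            x, v, t = x2, v2, t + h
            steps += 1
        # First-order method: local error ~ h^2, so rescale with
        # exponent 1/2; cap growth at 2x per step for stability.
        h *= min(0.9 * np.sqrt(tol / max(err, 1e-15)), 2.0)
    return x, v, steps

x, v, steps = adaptive_run(1.0, 0.0, 20.0)
print("final state: x=%.5f v=%.5f after %d accepted steps" % (x, v, steps))
```

In a discrete element code the same accept/rescale logic would run per particle, so each particle carries its own current step size.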

Abstract:

We examine the physical significance of fidelity as a measure of similarity for Gaussian states by drawing a comparison with its classical counterpart. We find that the relationship between these classical and quantum fidelities is not straightforward and in general does not seem to provide insight into the physical significance of quantum fidelity. To avoid this ambiguity, we propose that the efficacy of quantum information protocols be characterized by determining their transfer function and then calculating the fidelity achievable for a hypothetical pure reference input state.
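
A small numerical illustration of the mismatch described above: for two coherent states the quantum fidelity is F_q = |<alpha|beta>|^2 = exp(-|alpha - beta|^2), while the classical (Bhattacharyya) fidelity of their Husimi Q functions, computed on a phase-space grid below, comes out systematically larger, so the two notions disagree even in this simplest Gaussian case. The displacements and grid are arbitrary choices for the example.

```python
import numpy as np

# Quantum fidelity of coherent states |alpha>, |beta>:
#   F_q = |<alpha|beta>|^2 = exp(-|alpha - beta|^2).
# Classical fidelity of their Husimi Q functions
#   Q(gamma) = exp(-|gamma - alpha|^2) / pi
# is F_cl = (integral of sqrt(Q_a * Q_b))^2, evaluated on a grid.
alpha, beta = 0.0, 1.2                 # two real displacements

F_quantum = np.exp(-abs(alpha - beta) ** 2)

x = np.linspace(-6, 6, 801)            # grid over gamma = x + i*p
p = np.linspace(-6, 6, 801)
X, P = np.meshgrid(x, p)
G = X + 1j * P
dA = (x[1] - x[0]) * (p[1] - p[0])

Q_a = np.exp(-abs(G - alpha) ** 2) / np.pi
Q_b = np.exp(-abs(G - beta) ** 2) / np.pi
F_classical = (np.sqrt(Q_a * Q_b).sum() * dA) ** 2

print("quantum fidelity:   %.4f" % F_quantum)    # exp(-1.44) ~ 0.237
print("classical fidelity: %.4f" % F_classical)  # exp(-0.72) ~ 0.487
```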