887 results for non-Gaussian statistical mechanics
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
We show that the projected Gross-Pitaevskii equation (PGPE) can be mapped exactly onto Hamilton's equations of motion for classical position and momentum variables. Making use of this mapping, we adapt techniques developed in statistical mechanics to calculate the temperature and chemical potential of a classical Bose field in the microcanonical ensemble. We apply the method to simulations of the PGPE, which can be used to represent the highly occupied modes of Bose-condensed gases at finite temperature. The method is rigorous, valid beyond the realms of perturbation theory, and agrees with an earlier method of temperature measurement for the same system. Using this method we show that the critical temperature for condensation in a homogeneous Bose gas on a lattice with a UV cutoff increases with the interaction strength. We discuss how to determine the temperature shift for the Bose gas in the continuum limit using this type of calculation, and obtain a result in agreement with more sophisticated Monte Carlo simulations. We also consider the behavior of the specific heat.
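The mapping described above can be illustrated with a minimal sketch (not the paper's code): a complex mode amplitude c_k corresponds to a canonical pair via c_k = (q_k + i p_k)/sqrt(2), after which the field evolves under Hamilton's equations. The free-field Hamiltonian, mode frequencies, and cutoff below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: each complex mode amplitude c_k maps onto a canonical pair
# via c_k = (q_k + i p_k) / sqrt(2), turning the field equation of motion
# into Hamilton's equations. H = sum_k w_k (q_k^2 + p_k^2) / 2 is an
# illustrative free-field Hamiltonian with a mode cutoff (8 modes).

def hamiltonian(q, p, w):
    return 0.5 * np.sum(w * (q ** 2 + p ** 2))

def evolve(q, p, w, dt, steps):
    """Leapfrog (symplectic) integration of dq/dt = dH/dp, dp/dt = -dH/dq."""
    for _ in range(steps):
        p = p - 0.5 * dt * w * q   # half kick
        q = q + dt * w * p         # drift
        p = p - 0.5 * dt * w * q   # half kick
    return q, p

rng = np.random.default_rng(0)
w = np.linspace(1.0, 3.0, 8)                      # mode frequencies below cutoff
c = rng.normal(size=8) + 1j * rng.normal(size=8)  # complex mode amplitudes
q, p = np.sqrt(2) * c.real, np.sqrt(2) * c.imag   # canonical variables
E0 = hamiltonian(q, p, w)
q, p = evolve(q, p, w, dt=0.01, steps=1000)
E1 = hamiltonian(q, p, w)                         # conserved up to O(dt^2)
```

The symplectic integrator keeps the microcanonical energy (approximately) fixed, which is the ensemble in which the temperature is measured.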
Abstract:
Spatial characterization of non-Gaussian attributes in earth sciences and engineering commonly requires the estimation of their conditional distribution. The indicator and probability kriging approaches of current nonparametric geostatistics provide approximations for estimating conditional distributions. They do not, however, provide results similar to those in the cumbersome implementation of simultaneous cokriging of indicators. This paper presents a new formulation termed successive cokriging of indicators that avoids the classic simultaneous solution and related computational problems, while obtaining equivalent results to the impractical simultaneous solution of cokriging of indicators. A successive minimization of the estimation variance of probability estimates is performed, as additional data are successively included into the estimation process. In addition, the approach leads to an efficient nonparametric simulation algorithm for non-Gaussian random functions based on residual probabilities.
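The baseline the paper improves on can be sketched in a few lines: data are indicator-transformed, i(x) = 1 if z(x) <= z_c else 0, and a simple kriging estimate of the indicator at a target location approximates P(Z <= z_c | data). The covariance model and data values below are illustrative assumptions, not the paper's successive-cokriging formulation.

```python
import numpy as np

# Hedged sketch of plain (simple) indicator kriging in 1-D.
def cov(h, sill=1.0, corr_len=10.0):
    """Exponential covariance model (illustrative choice)."""
    return sill * np.exp(-np.abs(h) / corr_len)

x = np.array([0.0, 3.0, 7.0, 12.0])        # sample locations
z = np.array([1.2, 0.4, 2.5, 0.9])         # sample values
z_c = 1.0                                  # threshold of interest
ind = (z <= z_c).astype(float)             # indicator transform

x0 = 5.0                                   # estimation location
K = cov(x[:, None] - x[None, :])           # data-to-data covariances
k0 = cov(x - x0)                           # data-to-target covariances
lam = np.linalg.solve(K, k0)               # simple-kriging weights
p_mean = ind.mean()                        # stationary indicator mean
p_hat = p_mean + lam @ (ind - p_mean)      # estimate of P(Z <= z_c | data)
p_hat = min(max(p_hat, 0.0), 1.0)          # order-relation correction
```

Repeating this per threshold yields the approximate conditional distribution; the successive-cokriging formulation instead folds in data one at a time to recover the full simultaneous-cokriging solution.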
Abstract:
By stochastic modeling of the process of Raman photoassociation of Bose-Einstein condensates, we show that the farther the initial quantum state is from a coherent state, the farther the one-dimensional predictions are from those of the commonly used zero-dimensional approach. We compare the dynamics of condensates, initially in different quantum states, finding that, even when the quantum prediction for an initial coherent state is relatively close to the Gross-Pitaevskii prediction, an initial Fock state gives qualitatively different predictions. We also show that this difference is not present in a single-mode type of model, but that the quantum statistics assume a more important role as the dimensionality of the model is increased. This contrasting behavior in different dimensions, well known from critical phenomena in statistical mechanics, makes itself plainly visible here in a mesoscopic system and is a strong demonstration of the need to consider physically realistic models of interacting condensates.
Abstract:
We present a tractable theory of transport of simple fluids in cylindrical nanopores, considering trajectories of molecules between diffuse wall collisions at low density, and including viscous flow contributions at higher densities. The model is validated through molecular dynamics simulations of supercritical methane transport, over a wide range of conditions. We find excellent agreement between model and simulation at low to medium densities. However, at high densities the model tends to over-predict the transport behaviour, due to a large decrease in surface slip that is not well represented by the model. It is also seen that the concept of activated diffusion, commonly associated with diffusion in small pores, is fundamentally invalid for smooth pores.
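The low/high-density structure of such transport models can be sketched as a density-independent wall-collision (Knudsen) term plus a viscous Poiseuille-like term that grows with density. The combination rule, state point, and coefficients below are illustrative assumptions, not the paper's model.

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def knudsen_diffusivity(r, T, m):
    """Knudsen diffusivity D_K = (2/3) r <v> for a cylindrical pore of radius r."""
    v_mean = np.sqrt(8.0 * kB * T / (np.pi * m))   # mean molecular speed
    return (2.0 / 3.0) * r * v_mean

def viscous_term(r, T, rho, mu):
    """Poiseuille-like contribution ~ r^2 rho kB T / (8 mu) (ideal-gas driving force)."""
    return r ** 2 * rho * kB * T / (8.0 * mu)

m_ch4 = 16.04e-3 / 6.02214076e23       # methane molecular mass, kg
r = 1.0e-9                             # pore radius, m (illustrative)
D_low = knudsen_diffusivity(r, T=300.0, m=m_ch4)
D_total = D_low + viscous_term(r, T=300.0, rho=5e27, mu=1.1e-5)
```

At low density only the wall-collision term survives; as density rises the viscous term becomes comparable, which is the regime where the abstract reports the model starts to over-predict.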
Abstract:
We investigate an optical scheme to conditionally engineer quantum states using a beam splitter, homodyne detection, and a squeezed vacuum as an ancilla state. This scheme is efficient in producing non-Gaussian quantum states such as squeezed single photons and superpositions of coherent states (SCSs). We show that an SCS with well-defined parity and high fidelity can be generated from a Fock state of n photons.
Abstract:
Knowledge of the adsorption behavior of coal-bed gases, mainly under supercritical high-pressure conditions, is important for optimum design of production processes to recover coal-bed methane and to sequester CO2 in coal-beds. Here, we compare the two most rigorous adsorption methods based on the statistical mechanics approach, Density Functional Theory (DFT) and Grand Canonical Monte Carlo (GCMC) simulation, for single and binary mixtures of methane and carbon dioxide in slit-shaped pores ranging from around 0.75 to 7.5 nm in width, for pressures up to 300 bar and temperatures of 308-348 K, as a preliminary study for the CO2 sequestration problem. For single-component adsorption, the isotherms generated by DFT, especially for CO2, do not match well with GCMC calculations, and simulation is subsequently pursued here to investigate the binary mixture adsorption. For binary adsorption, upon increase of pressure, the selectivity of carbon dioxide relative to methane in a binary mixture initially increases to a maximum value, and subsequently drops before attaining a constant value at pressures higher than 300 bar. While the selectivity increases with temperature in the initial pressure-sensitive region, the constant high-pressure value is temperature independent. Optimum selectivity at any temperature is attained at a pressure of 90-100 bar at low bulk mole fraction of CO2, decreasing to approximately 35 bar at high bulk mole fractions. (c) 2005 American Institute of Chemical Engineers.
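The selectivity reported from such binary runs is conventionally defined as the ratio of adsorbed-phase to bulk-phase mole-fraction ratios, S = (x_CO2/x_CH4)/(y_CO2/y_CH4). A minimal sketch, with illustrative numbers rather than the paper's data:

```python
def selectivity(x_co2, y_co2):
    """CO2-over-CH4 selectivity S = (x_CO2/x_CH4) / (y_CO2/y_CH4),
    where x is the adsorbed-phase and y the bulk-phase CO2 mole fraction."""
    x_ch4 = 1.0 - x_co2
    y_ch4 = 1.0 - y_co2
    return (x_co2 / x_ch4) / (y_co2 / y_ch4)

# Illustrative: adsorbed phase enriched in CO2 relative to the bulk.
s = selectivity(x_co2=0.6, y_co2=0.2)   # (0.6/0.4) / (0.2/0.8) = 6.0
```

S > 1 means the pore preferentially takes up CO2, which is the quantity whose pressure and temperature dependence the abstract describes.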
Abstract:
We describe a generalization of the cluster-state model of quantum computation to continuous-variable systems, along with a proposal for an optical implementation using squeezed-light sources, linear optics, and homodyne detection. For universal quantum computation, a nonlinear element is required. This can be satisfied by adding to the toolbox any single-mode non-Gaussian measurement, while the initial cluster state itself remains Gaussian. Homodyne detection alone suffices to perform an arbitrary multimode Gaussian transformation via the cluster state. We also propose an experiment to demonstrate cluster-based error reduction when implementing Gaussian operations.
Abstract:
A formalism for describing the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics is applied to the problem of generalization in a perceptron with binary weights. The dynamics are solved for the case where a new batch of training patterns is presented to each population member each generation, which considerably simplifies the calculation. The theory is shown to agree closely with simulations of a real GA averaged over many runs, accurately predicting the mean best solution found. For weak selection and large problem size the difference equations describing the dynamics can be expressed analytically, and we find that the effects of noise due to the finite size of each training batch can be removed by increasing the population size appropriately. If this population resizing is used, one can deduce the most computationally efficient size of training batch each generation. For independent patterns this choice also gives the minimum total number of training patterns used. Although using independent patterns is a very inefficient use of training patterns in general, this work may also prove useful for determining the optimum batch size in the case where patterns are recycled.
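The setting analysed above can be reproduced as a toy experiment: a GA evolves binary weight vectors toward a binary teacher perceptron, scoring each generation on a fresh batch of random patterns. Population size, batch size, selection scheme, and mutation rate below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N, pop_size, batch, gens = 20, 40, 30, 60
teacher = rng.choice([-1, 1], size=N)              # binary teacher perceptron
pop = rng.choice([-1, 1], size=(pop_size, N))      # binary-weight population

def fitness(pop, patterns):
    """Fraction of patterns each member classifies like the teacher."""
    t_out = np.sign(patterns @ teacher)
    s_out = np.sign(patterns @ pop.T)
    return (s_out == t_out[:, None]).mean(axis=0)

best0 = fitness(pop, rng.choice([-1, 1], size=(500, N))).max()
for _ in range(gens):
    patterns = rng.choice([-1, 1], size=(batch, N))  # fresh batch each generation
    f = fitness(pop, patterns)                       # noisy fitness estimate
    parents = pop[np.argsort(f)[-pop_size // 2:]]    # truncation selection
    children = parents.copy()
    flips = rng.random(children.shape) < 0.02        # bit-flip mutation
    children[flips] *= -1
    pop = np.vstack([parents, children])
best1 = fitness(pop, rng.choice([-1, 1], size=(500, N))).max()
```

The finite batch makes the fitness a noisy estimate of generalization, which is exactly the noise source the abstract says can be traded off against population size.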
Abstract:
An adaptive back-propagation algorithm is studied and compared with gradient descent (standard back-propagation) for on-line learning in two-layer neural networks with an arbitrary number of hidden units. Within a statistical mechanics framework, both numerical studies and a rigorous analysis show that the adaptive back-propagation method results in faster training by breaking the symmetry between hidden units more efficiently and by providing faster convergence to optimal generalization than gradient descent.
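The on-line teacher-student setup underlying these analyses can be sketched directly: a student two-layer "soft committee machine" learns from a matching teacher, one fresh random example per step, by plain gradient descent. The tanh activation, learning-rate convention, and sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 50, 2                           # input dimension, hidden units
teacher = rng.normal(size=(K, N))      # fixed teacher weights
student = rng.normal(size=(K, N)) * 0.1
eta = 0.5                              # learning rate (illustrative)

def output(w, x):
    """Soft committee machine: sum of hidden-unit activations, no second-layer weights."""
    return np.sum(np.tanh(w @ x / np.sqrt(N)))

def gen_error(student, teacher, n_test=2000):
    """Monte Carlo estimate of the generalization error."""
    xs = rng.normal(size=(n_test, N))
    errs = [(output(student, x) - output(teacher, x)) ** 2 for x in xs]
    return 0.5 * float(np.mean(errs))

e0 = gen_error(student, teacher)
for _ in range(30000):                 # one fresh example per step (on-line)
    x = rng.normal(size=N)
    delta = output(teacher, x) - output(student, x)
    h = student @ x / np.sqrt(N)
    student += (eta / N) * delta * (1 - np.tanh(h) ** 2)[:, None] * x[None, :]
e1 = gen_error(student, teacher)       # should fall below e0
```

The slow phase of this dynamics is the symmetric plateau in which hidden units imitate each other; adaptive back-propagation accelerates training precisely by breaking that symmetry faster.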
Abstract:
The learning properties of a universal approximator, a normalized committee machine with adjustable biases, are studied for on-line back-propagation learning. Within a statistical mechanics framework, numerical studies show that this model has features which do not exist in previously studied two-layer network models without adjustable biases, e.g., attractive suboptimal symmetric phases even for realizable cases and noiseless data.
Abstract:
Neural networks have often been motivated by superficial analogy with biological nervous systems. Recently, however, it has become widely recognised that the effective application of neural networks requires instead a deeper understanding of the theoretical foundations of these models. Insight into neural networks comes from a number of fields including statistical pattern recognition, computational learning theory, statistics, information geometry and statistical mechanics. As an illustration of the importance of understanding the theoretical basis for neural network models, we consider their application to the solution of multi-valued inverse problems. We show how a naive application of the standard least-squares approach can lead to very poor results, and how an appreciation of the underlying statistical goals of the modelling process allows the development of a more general and more powerful formalism which can tackle the problem of multi-modality.
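The failure mode described above has a one-line numerical illustration: when the "inverse" target takes two equally likely values, the least-squares optimal predictor is the conditional mean, which sits between the branches and is far from every actual solution. The two-branch target below is an illustrative stand-in for a multi-valued inverse problem.

```python
import numpy as np

rng = np.random.default_rng(2)
t = rng.choice([-1.0, 1.0], size=10000)   # two equally likely solution branches
ls_prediction = t.mean()                  # least-squares optimum = conditional mean ~ 0
branch_error = np.abs(ls_prediction - t).mean()   # distance to the nearest valid answer
```

The least-squares answer is near 0, yet every valid solution is at +1 or -1; modelling the full conditional distribution (rather than its mean) is the more general formalism the abstract advocates.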
Abstract:
We present a method for determining the globally optimal on-line learning rule for a soft committee machine under a statistical mechanics framework. This rule maximizes the total reduction in generalization error over the whole learning process. A simple example demonstrates that the locally optimal rule, which maximizes the rate of decrease in generalization error, may perform poorly in comparison.
Abstract:
An adaptive back-propagation algorithm parameterized by an inverse temperature 1/T is studied and compared with gradient descent (standard back-propagation) for on-line learning in two-layer neural networks with an arbitrary number of hidden units. Within a statistical mechanics framework, we analyse these learning algorithms in both the symmetric and the convergence phase for finite learning rates in the case of uncorrelated teachers of similar but arbitrary length T. These analyses show that adaptive back-propagation results generally in faster training by breaking the symmetry between hidden units more efficiently and by providing faster convergence to optimal generalization than gradient descent.
Abstract:
We analyse the dynamics of a number of second order on-line learning algorithms training multi-layer neural networks, using the methods of statistical mechanics. We first consider on-line Newton's method, which is known to provide optimal asymptotic performance. We determine the asymptotic generalization error decay for a soft committee machine, which is shown to compare favourably with the result for standard gradient descent. Matrix momentum provides a practical approximation to this method by allowing an efficient inversion of the Hessian. We consider an idealized matrix momentum algorithm which requires access to the Hessian and find close correspondence with the dynamics of on-line Newton's method. In practice, the Hessian will not be known on-line and we therefore consider matrix momentum using a single example approximation to the Hessian. In this case good asymptotic performance may still be achieved, but the algorithm is now sensitive to parameter choice because of noise in the Hessian estimate. On-line Newton's method is not appropriate during the transient learning phase, since a suboptimal unstable fixed point of the gradient descent dynamics becomes stable for this algorithm. A principled alternative is to use Amari's natural gradient learning algorithm and we show how this method provides a significant reduction in learning time when compared to gradient descent, while retaining the asymptotic performance of on-line Newton's method.
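The asymptotic advantage of Newton-style (and natural-gradient) updates over plain gradient descent can be seen on an ill-conditioned quadratic loss E(w) = 0.5 w^T H w, a stand-in for the curvature near a minimum; the Hessian and learning rate below are illustrative assumptions, not the paper's networks.

```python
import numpy as np

H = np.diag([100.0, 1.0])       # ill-conditioned curvature (condition number 100)
w_gd = np.array([1.0, 1.0])
w_newton = w_gd.copy()

eta = 0.005                     # must be small for the stiff direction
for _ in range(50):
    w_gd = w_gd - eta * (H @ w_gd)          # plain gradient descent

# One Newton step: precondition the gradient by the inverse Hessian.
w_newton = w_newton - np.linalg.solve(H, H @ w_newton)
```

Gradient descent crawls along the flat direction (its rate is set by the smallest eigenvalue once eta is limited by the largest), while the Hessian-preconditioned step reaches the minimum at once; matrix momentum and natural gradient are ways of approximating this preconditioning on-line.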