15 results for Bose-Einstein condensation statistical model
in Aston University Research Archive
Abstract:
In this paper, we present a theoretical study of a Bose-Einstein condensate of interacting bosons in a quartic trap in one, two, and three dimensions. Using the Thomas-Fermi approximation, suitably complemented by numerical solutions of the Gross-Pitaevskii equation, we study the ground-state condensate density profiles, the chemical potential, the effects of cross-terms in the quartic potential, the temporal evolution of various energy components of the condensate, and width oscillations of the condensate. The results obtained are compared with corresponding results for a Bose condensate in harmonic confinement.
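As a hedged illustration of the Thomas-Fermi step described above: for a pure quartic potential V(r) = λr⁴ (ignoring the cross-terms the paper also treats), neglecting the kinetic energy in the Gross-Pitaevskii equation gives a closed-form density profile:

```latex
% Thomas-Fermi profile in a pure quartic trap V(r) = \lambda r^4
% (illustrative sketch; cross-terms in the quartic potential are omitted)
\mu = V(\mathbf{r}) + g\,n(\mathbf{r})
\quad\Longrightarrow\quad
n(\mathbf{r}) =
\begin{cases}
\dfrac{\mu - \lambda r^{4}}{g}, & \lambda r^{4} < \mu,\\[4pt]
0, & \text{otherwise},
\end{cases}
```

with g the interaction coupling and the chemical potential μ fixed by the normalization ∫ n(r) d³r = N.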
Abstract:
In this letter, we propose an analytical approach to modeling uplink intercell interference (ICI) in hexagonal-grid-based orthogonal frequency division multiple access (OFDMA) cellular networks. The key idea is that the uplink ICI from each individual cell is approximated by a lognormal distribution whose statistical parameters are determined analytically. Accordingly, the aggregated uplink ICI is approximated by another lognormal distribution, whose statistical parameters can be determined from those of the individual cells using the Fenton-Wilkinson method. Analytical expressions for uplink ICI are derived for two traditional frequency reuse schemes, namely integer frequency reuse with factor 1 (IFR-1) and with factor 3 (IFR-3). Uplink fractional power control and lognormal shadowing are modeled. System performance in terms of signal to interference plus noise ratio (SINR) and spectrum efficiency is also derived. The proposed model has been validated by simulations. © 2013 IEEE.
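A minimal sketch of the Fenton-Wilkinson moment-matching step used above, assuming independent lognormal ICI terms in natural units (the letter's power-control, shadowing, and reuse-scheme specifics are omitted):

```python
import numpy as np

def fenton_wilkinson(mu, sigma):
    """Approximate a sum of independent lognormals by a single lognormal.

    mu, sigma: per-term mean and std of the *underlying* Gaussians,
    i.e. term i is exp(N(mu_i, sigma_i^2)). Returns (mu_z, sigma_z)
    of the matching lognormal, obtained by matching the first two
    moments of the sum (Fenton-Wilkinson).
    """
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    m = np.exp(mu + sigma**2 / 2)                         # per-term means
    v = (np.exp(sigma**2) - 1) * np.exp(2*mu + sigma**2)  # per-term variances
    M, V = m.sum(), v.sum()                               # moments of the sum
    sigma_z2 = np.log(1 + V / M**2)
    mu_z = np.log(M) - sigma_z2 / 2
    return mu_z, np.sqrt(sigma_z2)

# Monte-Carlo sanity check on three hypothetical interfering cells
rng = np.random.default_rng(0)
mu_i, sig_i = [0.0, 0.5, -0.3], [0.6, 0.8, 0.7]
samples = sum(rng.lognormal(m, s, 200_000) for m, s in zip(mu_i, sig_i))
mu_z, sig_z = fenton_wilkinson(mu_i, sig_i)
print(mu_z, sig_z)                                  # analytical approximation
print(np.log(samples).mean(), np.log(samples).std())  # empirical comparison
```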
Abstract:
We present a novel approach for the optical manipulation of neutral atoms in annular light structures produced by the phenomenon of conical refraction occurring in biaxial optical crystals. For a beam focused to a plane behind the crystal, the focal plane exhibits two concentric bright rings enclosing a ring of null intensity called the Poggendorff ring. We demonstrate both theoretically and experimentally that the Poggendorff dark ring of conical refraction is confined in three dimensions by regions of higher intensity. We derive the positions of the confining intensity maxima and minima and discuss the application of the Poggendorff ring for trapping ultra-cold atoms using the repulsive dipole force of blue-detuned light. We give analytical expressions for the trapping frequencies and potential depths along both the radial and the axial directions. Finally, we present realistic numerical simulations of the dynamics of a 87Rb Bose-Einstein condensate trapped inside the Poggendorff ring which are in good agreement with corresponding experimental results.
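Schematically (not the paper's derived conical-refraction expressions), the trapping frequencies quoted above follow from a harmonic expansion of the repulsive blue-detuned dipole potential about the intensity minimum at the ring radius r₀:

```latex
% Generic harmonic expansion about the dark-ring minimum (schematic)
U(r, z) \approx U_0
  + \tfrac{1}{2} M \omega_r^{2} (r - r_0)^{2}
  + \tfrac{1}{2} M \omega_z^{2} z^{2},
\qquad
\omega_i = \sqrt{\frac{1}{M}\,
  \frac{\partial^{2} U}{\partial x_i^{2}}\bigg|_{\mathrm{min}}},
```

where M is the atomic mass and U is the dipole potential, repulsive for blue detuning.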
Abstract:
Aggregation and caking of particles are common and severe problems in many operations involving the handling and processing of granular materials, of which granulated sugar is an important example. Preventing aggregation and caking of granular materials requires a good understanding of moisture migration and caking mechanisms. In this paper, a model of solid bridge formation between particles is introduced, based on the migration of atmospheric moisture into containers packed with granular materials through vapor evaporation and condensation. A model for the caking process is then developed, based on the growth of liquid bridges (during condensation) and their hardening and subsequent conversion into solid bridges (during evaporation). The predicted caking strengths agree well with available experimental data on granulated sugar under storage conditions.
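Purely as an illustrative toy of the condensation-evaporation cycle described above, and emphatically not the paper's calibrated model, one can track a liquid bridge that grows while vapor condenses and hardens into a solid bridge when it evaporates (all parameter names here are hypothetical):

```python
# Toy illustration of the bridge cycle -- NOT the paper's model.
# k_cond, k_evap, alpha are hypothetical, dimensionless parameters.
def caking_cycle(n_cycles, k_cond=0.1, k_evap=0.8, alpha=1.0):
    liquid, solid = 0.0, 0.0
    for _ in range(n_cycles):
        liquid += k_cond             # condensation: liquid bridge grows
        deposited = k_evap * liquid  # evaporation: dissolved solid precipitates
        solid += deposited
        liquid -= deposited
    return alpha * solid             # caking strength ~ solid-bridge mass

print(caking_cycle(10))
```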
Abstract:
The aim of this study was to determine the cues used to signal avoidance of difficult driving situations and to test the hypothesis that drivers with relatively poor high contrast visual acuity (HCVA) have fewer crashes than drivers with relatively poor normalised low contrast visual acuity (NLCVA). This is because those with poorer HCVA are well aware of their difficulties and avoid dangerous driving situations, while those with poorer NLCVA are often unaware of the extent of their problem. Age, self-reported situation avoidance and HCVA were collected during a practice-based study of 690 drivers. Screening was also carried out on 7254 drivers at various venues, mainly motorway sites, throughout the UK. Age, self-reported situation avoidance and prior crash involvement were recorded, and Titmus vision screeners were used to measure HCVA and NLCVA. Situation avoidance increased in reduced visibility conditions and was influenced by age and HCVA. Only half of the drivers used visual cues to signal situation avoidance, and most of these drivers used high rather than low contrast cues. A statistical model designed to remove confounding interrelationships between variables showed, for drivers who did not report situation avoidance, that crash involvement decreased for drivers with below-average HCVA and increased for those with below-average NLCVA. These relationships accounted for less than 1% of the crash variance, so the hypothesis was not strongly supported. © 2002 The College of Optometrists.
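The abstract does not name the statistical model used; a hedged sketch of this kind of jointly adjusted analysis is a logistic regression of crash involvement on age, HCVA, and NLCVA (all data below are synthetic placeholders):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical stand-in for the "statistical model designed to remove
# confounding interrelationships": logistic regression adjusting for
# age, HCVA and NLCVA simultaneously. Data are synthetic.
rng = np.random.default_rng(1)
n = 7254
age = rng.uniform(18, 80, n)
hcva = rng.normal(0.0, 0.1, n)                  # acuity relative to average
nlcva = 0.6 * hcva + rng.normal(0, 0.08, n)     # correlated with HCVA
logit_p = -1.5 - 2.0 * hcva + 1.5 * nlcva + 0.005 * (age - 45)
crash = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([age, hcva, nlcva]))
fit = sm.Logit(crash, X).fit(disp=0)
print(fit.summary(xname=["const", "age", "HCVA", "NLCVA"]))
```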
Abstract:
Analysis of variance (ANOVA) is the most efficient method available for the analysis of experimental data. Analysis of variance is a method of considerable complexity and subtlety, with many different variations, each of which applies in a particular experimental context. Hence, it is possible to apply the wrong type of ANOVA to data and, therefore, to draw an erroneous conclusion from an experiment. This article reviews the types of ANOVA most likely to arise in clinical experiments in optometry including the one-way ANOVA ('fixed' and 'random effect' models), two-way ANOVA in randomised blocks, three-way ANOVA, and factorial experimental designs (including the varieties known as 'split-plot' and 'repeated measures'). For each ANOVA, the appropriate experimental design is described, a statistical model is formulated, and the advantages and limitations of each type of design discussed. In addition, the problems of non-conformity to the statistical model and determination of the number of replications are considered. © 2002 The College of Optometrists.
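As a minimal concrete example of the simplest design discussed, a one-way fixed-effect ANOVA on three synthetic treatment groups:

```python
import numpy as np
from scipy import stats

# One-way fixed-effects ANOVA on three hypothetical treatment groups
# (synthetic data; illustrates only the simplest design reviewed above).
rng = np.random.default_rng(42)
g1 = rng.normal(10.0, 2.0, 12)   # control
g2 = rng.normal(11.5, 2.0, 12)   # treatment A
g3 = rng.normal(13.0, 2.0, 12)   # treatment B

f_stat, p_value = stats.f_oneway(g1, g2, g3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```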
Abstract:
A visualization plot of a molecular data set is a useful tool for gaining insight into a set of molecules. In chemoinformatics, most visualization plots are of molecular descriptors, and the statistical model most often used to produce a visualization is principal component analysis (PCA). This paper takes PCA, together with four other statistical models (NeuroScale, GTM, LTM, and LTM-LIN), and evaluates their ability to produce clustering in visualizations not of molecular descriptors but of molecular fingerprints. Two different tasks are addressed: understanding structural information (particularly combinatorial libraries) and relating structure to activity. The quality of the visualizations is compared both subjectively (by visual inspection) and objectively (with global distance comparisons and local k-nearest-neighbor predictors). On the data sets used to evaluate clustering by structure, LTM is found to perform significantly better than the other models. In particular, the clusters in LTM visualization space are consistent with the relationships between the core scaffolds that define the combinatorial sublibraries. On the data sets used to evaluate clustering by activity, LTM again gives the best performance, but by a smaller margin. The results of this paper demonstrate the value of using both a nonlinear projection map and a Bernoulli noise model for modeling binary data.
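A minimal sketch of the PCA baseline applied to binary fingerprints (random bits stand in for real fingerprints; the paper's point is that nonlinear models such as LTM separate such binary data better):

```python
import numpy as np
from sklearn.decomposition import PCA

# Baseline PCA visualization of binary molecular fingerprints.
# Random bits stand in for real fingerprint data.
rng = np.random.default_rng(0)
fingerprints = rng.integers(0, 2, size=(200, 1024))  # 200 molecules, 1024 bits

coords = PCA(n_components=2).fit_transform(fingerprints.astype(float))
print(coords.shape)  # (200, 2) -> scatter-plot these for visual inspection
```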
Abstract:
The target of no-reference (NR) image quality assessment (IQA) is to establish a computational model that predicts the visual quality of an image. The existing prominent method is based on natural scene statistics (NSS); it uses the joint and marginal distributions of wavelet coefficients for IQA. However, this method is only applicable to JPEG2000-compressed images. Since the wavelet transform fails to capture the directional information of images, an improved NSS model is established using contourlets. In this paper, the contourlet transform is applied to model the NSS of images, and the relationships among contourlet coefficients are represented by their joint distribution. The statistics of the contourlet coefficients serve as indicators of variation in image quality. In addition, an image-dependent threshold is adopted to reduce the effect of image content on the statistical model. Finally, image quality is evaluated by combining the extracted features in each subband nonlinearly. Our algorithm is trained and tested on the LIVE database II. Experimental results demonstrate that the proposed algorithm is superior to the conventional NSS model and can be applied to different distortions. © 2009 Elsevier B.V. All rights reserved.
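A hedged sketch of subband NSS feature extraction; wavelet subbands from PyWavelets stand in here for the paper's contourlet transform (contourlet implementations are less standardized), and the features are simple subband marginal statistics rather than the paper's joint-distribution features:

```python
import numpy as np
import pywt
from scipy.stats import kurtosis

def nss_features(image, wavelet="db2", levels=3):
    """Per-subband marginal statistics (variance, kurtosis) as a crude
    stand-in for contourlet-based NSS features."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    feats = []
    for detail in coeffs[1:]:           # skip the approximation subband
        for band in detail:             # horizontal/vertical/diagonal
            c = band.ravel()
            feats += [np.var(c), kurtosis(c)]
    return np.array(feats)

img = np.random.default_rng(3).normal(size=(128, 128))  # toy "image"
print(nss_features(img).shape)          # feature vector for a regressor
```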
Abstract:
We provide a theoretical explanation of the results on intensity distributions and correlation functions obtained from a random-beam speckle field in nonlinear bulk waveguides reported in the recent publication by Bromberg et al. [Nat. Photonics 4, 721 (2010)]. We study both the focusing and defocusing cases and, in the limit of small speckle size (a short-correlated disordered beam), provide analytical asymptotes for the intensity probability distributions at the output facet. Additionally, we provide a simple relation between the speckle sizes at the input and output of a focusing nonlinear waveguide. The results are of practical significance for nonlinear Hanbury Brown and Twiss interferometry in both optical waveguides and Bose-Einstein condensates. © 2012 American Physical Society.
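For context, the linear (Gaussian-statistics) baseline that the nonlinear propagation deforms is the textbook negative-exponential law of fully developed speckle:

```latex
% Fully developed speckle under Gaussian statistics (linear baseline)
P(I) = \frac{1}{\langle I \rangle}\,
       \exp\!\left(-\frac{I}{\langle I \rangle}\right),
\qquad
g^{(2)}(0) = \frac{\langle I^{2} \rangle}{\langle I \rangle^{2}} = 2 .
```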
Abstract:
We present the essential features of the dissipative parametric instability in the universal complex Ginzburg-Landau equation. The dissipative parametric instability is excited through a parametric modulation of frequency-dependent losses in a zig-zag fashion in the spectral domain: damping is applied alternately to spectral components in the +ΔF and −ΔF regions, where F can represent wavenumber or temporal frequency depending on the application. Such spectral modulation can destabilize the homogeneous stationary solution of the system, leading to the growth of spectral sidebands and to consequent pattern formation; both stable and unstable patterns can be excited in one- and two-dimensional systems. The dissipative parametric instability provides a useful and interesting tool for the control of pattern formation in nonlinear optical systems, with potentially interesting technological applications such as the design of mode-locked lasers emitting pulse trains with tunable repetition rate; it could also find realizations in nanophotonic circuits or in dissipative polaritonic Bose-Einstein condensates.
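A minimal split-step sketch of the mechanism, assuming the one-dimensional cubic Ginzburg-Landau equation ψ_t = ψ + (1+ib)ψ_xx − (1+ic)|ψ|²ψ with an extra loss applied alternately to the +ΔF and −ΔF spectral bands; all parameter values are illustrative, not the paper's:

```python
import numpy as np

# Split-step integration of the cubic CGLE with "zig-zag" spectral losses:
# a loss band around +dF is active during even half-periods, around -dF
# during odd half-periods. Illustrative parameters only.
N, L = 256, 50.0
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)        # angular wavenumbers
b, c, dF, band, loss = 0.5, -1.2, 2.0, 0.5, 0.8
dt, t_mod = 0.01, 1.0                              # step, modulation half-period

rng = np.random.default_rng(7)
psi = 1.0 + 1e-3 * rng.normal(size=N)              # noisy homogeneous state

lin = np.exp(dt * (1.0 - (1.0 + 1j * b) * k**2))   # exact linear propagator
for n in range(20_000):
    t = n * dt
    sign = +1 if int(t / t_mod) % 2 == 0 else -1   # zig-zag: +dF then -dF
    damp = np.where(np.abs(k - sign * dF) < band, np.exp(-loss * dt), 1.0)
    psi = np.fft.ifft(lin * damp * np.fft.fft(psi))          # linear step
    psi *= np.exp(-dt * (1.0 + 1j * c) * np.abs(psi) ** 2)   # nonlinear step

# A spread between max and min |psi| indicates departure from the
# homogeneous state, i.e. pattern formation.
print(np.abs(psi).max(), np.abs(psi).min())
```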
Abstract:
The K-means algorithm is one of the most popular clustering algorithms in current use, as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions whilst remaining almost as fast and simple. This novel algorithm, which we call MAP-DP (maximum a posteriori Dirichlet process mixtures), is statistically rigorous, as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means, with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross-validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
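A heavily simplified sketch of the MAP-DP assignment rule for spherical Gaussian clusters with fixed variance; the full algorithm uses conjugate priors and proper marginal likelihoods, but the two ingredients absent from K-means, the −log N_k cluster-size bonus and the option to open a new cluster, already appear here:

```python
import numpy as np

# Simplified MAP-DP-style hard assignment (spherical Gaussians, fixed
# variance sigma2). Existing clusters are favored in proportion to their
# size (-log N_k, the "rich get richer" effect); any point may instead
# open a new cluster at cost -log(alpha). Illustrative sketch only.
def map_dp_sketch(X, alpha=1.0, sigma2=1.0, n_iters=10):
    z = np.zeros(len(X), dtype=int)            # start with one cluster
    for _ in range(n_iters):
        for i, x in enumerate(X):
            z[i] = -1                          # remove point i
            ks, counts = np.unique(z[z >= 0], return_counts=True)
            costs = [np.sum((x - X[z == k].mean(axis=0))**2) / (2*sigma2)
                     - np.log(n) for k, n in zip(ks, counts)]
            costs.append(-np.log(alpha))       # cost of a new cluster
            best = int(np.argmin(costs))
            z[i] = ks[best] if best < len(ks) else (ks.max() + 1 if len(ks) else 0)
    return z

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in (0.0, 3.0)])
print(np.unique(map_dp_sketch(X)))             # recovered cluster labels
```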
Abstract:
The Dirichlet process mixture model (DPMM) is a ubiquitous, flexible Bayesian nonparametric statistical model. However, full probabilistic inference in this model is analytically intractable, so computationally intensive techniques such as Gibbs sampling are required. As a result, DPMM-based methods, which have considerable potential, are restricted to applications in which computational resources and inference time are plentiful. For example, they would not be practical for digital signal processing on embedded hardware, where computational resources are at a serious premium. Here, we develop a simplified yet statistically rigorous approximate maximum a posteriori (MAP) inference algorithm for DPMMs. This algorithm is as simple as DP-means clustering and solves the MAP problem as well as Gibbs sampling, while requiring only a fraction of the computational effort. (For freely available code that implements the MAP-DP algorithm for Gaussian mixtures, see http://www.maxlittle.net/.) Unlike related small variance asymptotics (SVA), our method is non-degenerate and so inherits the “rich get richer” property of the Dirichlet process. It also retains a non-degenerate closed-form likelihood, which enables out-of-sample calculations and the use of standard tools such as cross-validation. We illustrate the benefits of our algorithm on a range of examples and contrast it to variational, SVA and sampling approaches from both a computational complexity perspective and in terms of clustering performance. We demonstrate the wide applicability of our approach by presenting an approximate MAP inference method for the infinite hidden Markov model, whose performance contrasts favorably with a recently proposed hybrid SVA approach. Similarly, we show how our algorithm can be applied to a semiparametric mixed-effects regression model where the random effects distribution is modelled using an infinite mixture model, as used in longitudinal progression modelling in population health science. Finally, we propose directions for future research on approximate MAP inference in Bayesian nonparametrics.
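The “rich get richer” property mentioned above comes from the Chinese-restaurant-process prior underlying the DPMM: point i joins an existing cluster in proportion to its current size, or opens a new one with probability controlled by the concentration α:

```latex
% Chinese-restaurant-process conditional prior for the DPMM,
% with N_k^{-i} the size of cluster k excluding point i
p(z_i = k \mid z_{-i}) = \frac{N_k^{-i}}{N - 1 + \alpha},
\qquad
p(z_i = K+1 \mid z_{-i}) = \frac{\alpha}{N - 1 + \alpha}.
```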
Abstract:
Information systems have developed to the stage that there is plenty of data available in most organisations but there are still major problems in turning that data into information for management decision making. This thesis argues that the link between decision support information and transaction processing data should be through a common object model which reflects the real world of the organisation and encompasses the artefacts of the information system. The CORD (Collections, Objects, Roles and Domains) model is developed which is richer in appropriate modelling abstractions than current Object Models. A flexible Object Prototyping tool based on a Semantic Data Storage Manager has been developed which enables a variety of models to be stored and experimented with. A statistical summary table model COST (Collections of Objects Statistical Table) has been developed within CORD and is shown to be adequate to meet the modelling needs of Decision Support and Executive Information Systems. The COST model is supported by a statistical table creator and editor COSTed which is also built on top of the Object Prototyper and uses the CORD model to manage its metadata.
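A hypothetical sketch of the four CORD abstractions as plain Python types; the thesis's actual definitions are richer and tool-specific:

```python
from dataclasses import dataclass, field

# Hypothetical rendering of the CORD abstractions -- Collections,
# Objects, Roles, Domains -- as simple types, for illustration only.
@dataclass
class Domain:            # the set of legal values for an attribute
    name: str
    values: set = field(default_factory=set)

@dataclass
class Role:              # a part an object plays in some context
    name: str

@dataclass
class CordObject:        # a real-world entity with roles and attributes
    name: str
    roles: list[Role] = field(default_factory=list)
    attributes: dict[str, Domain] = field(default_factory=dict)

@dataclass
class Collection:        # a managed set of objects
    name: str
    members: list[CordObject] = field(default_factory=list)
```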
Abstract:
We modify a nonlinear σ model (NLσM) for the description of a granular disordered system in the presence of both the Coulomb repulsion and the Cooper pairing. We show that under certain controlled approximations the action of this model is reduced to the Ambegaokar-Eckern-Schön (AES) action, which is further reduced to the Bose-Hubbard (or “dirty-boson”) model with renormalized coupling constants. We obtain an effective action which is more general than the AES one but still simpler than the full NLσM action. This action can be applied in the region of parameters where the reduction to the AES or the Bose-Hubbard model is not justified. This action may lead to a different picture of the superconductor-insulator transition in two-dimensional systems.
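For reference, the standard Bose-Hubbard Hamiltonian to which the AES action reduces, with t, U, and μ read as the renormalized couplings mentioned above:

```latex
% Standard Bose-Hubbard ("dirty-boson") Hamiltonian
H = -t \sum_{\langle i j \rangle}
      \left( b_i^{\dagger} b_j + \mathrm{h.c.} \right)
    + \frac{U}{2} \sum_i n_i (n_i - 1)
    - \mu \sum_i n_i .
```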
Abstract:
The paper presents a simulation of the pyrolysis vapor condensation process using an Eulerian approach. The condensable volatiles produced by the fast pyrolysis of biomass in a 100 g/h bubbling fluidized bed reactor are condensed in a water-cooled condenser. The vapors enter the condenser at 500 °C, and the water temperature is 15 °C. The properties of the vapor phase are calculated according to the mole fractions of its individual compounds. The saturated vapor pressure is calculated for the vapor mixture using a corresponding-states correlation, assuming that the mixture of condensable compounds behaves as a pure fluid. Fluent 6.3 has been used as the simulation platform, while the condensation model has been incorporated into the main code through an external user-defined function. © 2011 American Chemical Society.
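A hedged sketch of a corresponding-states saturated-vapor-pressure estimate in the Lee-Kesler form; since the paper treats the condensable mixture as a pure pseudo-fluid, Tc, Pc, and ω here would be mole-fraction-averaged pseudo-critical properties (the values below are placeholders, not the paper's):

```python
import numpy as np

# Lee-Kesler corresponding-states vapor-pressure correlation:
#   ln(Psat/Pc) = f0(Tr) + omega * f1(Tr),  Tr = T/Tc.
# Placeholder pseudo-critical properties stand in for the paper's
# mixture-averaged values.
def p_sat_lee_kesler(T, Tc, Pc, omega):
    Tr = T / Tc
    f0 = 5.92714 - 6.09648/Tr - 1.28862*np.log(Tr) + 0.169347*Tr**6
    f1 = 15.2518 - 15.6875/Tr - 13.4721*np.log(Tr) + 0.43577*Tr**6
    return Pc * np.exp(f0 + omega * f1)

print(p_sat_lee_kesler(T=400.0, Tc=600.0, Pc=4.0e6, omega=0.6))  # Pa
```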