978 results for Space representation
Abstract:
We introduce a Gaussian quantum operator representation, using the most general possible multimode Gaussian operator basis. The representation unifies and substantially extends existing phase-space representations of density matrices for Bose systems and also includes generalized squeezed-state and thermal bases. It enables first-principles dynamical or equilibrium calculations in quantum many-body systems, with quantum uncertainties appearing as dynamical objects. Any quadratic Liouville equation for the density operator results in a purely deterministic time evolution. Any cubic or quartic master equation can be treated using stochastic methods.
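The deterministic-evolution claim has a compact illustration: under any quadratic Hamiltonian a Gaussian state stays Gaussian, so only its first and second moments move. A minimal single-mode numpy sketch of this standard fact (not the paper's multimode operator basis):

import numpy as np
from scipy.linalg import expm

# Quadratic Hamiltonian H = (1/2) r^T G r in the quadratures r = (x, p).
# Gaussian states stay Gaussian, and the covariance matrix evolves
# deterministically: sigma -> S sigma S^T with S = exp(Omega G t), hbar = 1.
Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])   # symplectic form
G = np.diag([1.0, 1.0])                       # unit-frequency oscillator
S = expm(Omega @ G * 0.3)                     # symplectic propagator, t = 0.3

sigma0 = np.diag([0.25, 1.0])                 # a squeezed-vacuum covariance
sigma_t = S @ sigma0 @ S.T                    # purely deterministic evolution
print(np.round(sigma_t, 4))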
Abstract:
We present phase-space techniques for the modelling of spontaneous emission in two-level bosonic atoms. The positive-P representation is shown to give a complete description within the limits of our model. The Wigner representation, even when truncated at second order, is shown to need a doubling of the phase space to allow for a positive-definite diffusion matrix in the appropriate Fokker-Planck equation, and it still fails to agree with the full quantum results of the positive-P representation. We show that quantum statistics and correlations between the ground and excited states affect the dynamics of the emission process, so that it is in general non-exponential.
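The truncated-Wigner mechanics can be seen on a toy single damped mode: symmetric-ordered vacuum noise in the initial state, Euler-Maruyama evolution with damping noise, and observables recovered from trajectory averages. A generic sketch, not the paper's two-level atom model:

import numpy as np

rng = np.random.default_rng(1)
gamma, dt, steps, ntraj, alpha0 = 1.0, 1e-3, 500, 50_000, 2.0

# Initial coherent state: half a quantum of symmetric-ordered vacuum noise.
alpha = alpha0 + (rng.normal(size=ntraj) + 1j * rng.normal(size=ntraj)) / 2
for _ in range(steps):
    # Damping drift plus the noise that keeps the vacuum variance at 1/2.
    dW = (rng.normal(size=ntraj) + 1j * rng.normal(size=ntraj)) * np.sqrt(dt / 2)
    alpha += -0.5 * gamma * alpha * dt + np.sqrt(gamma / 2) * dW

t = steps * dt
n_mean = np.mean(np.abs(alpha) ** 2) - 0.5   # symmetric moment |alpha|^2 = <n> + 1/2
print(n_mean, "vs exact", abs(alpha0) ** 2 * np.exp(-gamma * t))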
Abstract:
We introduce a unified Gaussian quantum operator representation for fermions and bosons. The representation extends existing phase-space methods to Fermi systems as well as the important case of Fermi-Bose mixtures. It enables simulations of the dynamics and thermal equilibrium states of many-body quantum systems from first principles. As an example, we numerically calculate finite-temperature correlation functions for the Fermi Hubbard model, with no evidence of the Fermi sign problem.
Abstract:
We investigate the quantum many-body dynamics of dissociation of a Bose-Einstein condensate of molecular dimers into pairs of constituent bosonic atoms and analyze the resulting atom-atom correlations. The quantum fields of both the molecules and atoms are simulated from first principles in three dimensions using the positive-P representation method. This allows us to provide an exact treatment of the molecular field depletion and s-wave scattering interactions between the particles, as well as to extend the analysis to nonuniform systems. In the simplest uniform case, we find that the major source of atom-atom decorrelation is atom-atom recombination, which produces molecules outside the initially occupied condensate mode. The unwanted molecules are formed from dissociated atom pairs with nonopposite momenta. The net effect of this process, which becomes increasingly significant for dissociation durations corresponding to more than about 40% conversion, is to reduce the atom-atom correlations. In addition, for nonuniform systems we find that mode mixing due to inhomogeneity can result in further degradation of the correlation signal. We characterize the correlation strength via the degree of squeezing of particle number-difference fluctuations in a certain momentum-space volume and show that the correlation strength can be increased if the signals are binned into larger counting volumes.
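The number-difference squeezing diagnostic and the benefit of larger counting volumes can be mimicked with a toy pair-scattering model (all parameters hypothetical; the paper's results come from full positive-P field simulations):

import numpy as np

rng = np.random.default_rng(2)
nshots, mean_pairs, jitter = 2000, 500, 0.15

def squeezing(bin_width):
    # V = Var(N+ - N-) / (<N+> + <N->); V = 1 is shot noise, V < 1 is squeezing.
    diffs, sums = [], []
    for _ in range(nshots):
        n = rng.poisson(mean_pairs)
        k = rng.uniform(1.0, 2.0, n)            # atoms on the +k side
        kp = k + rng.normal(0.0, jitter, n)     # partners at -k, smeared by mode mixing
        lo, hi = 1.3, 1.3 + bin_width           # counting volume (mirrored at -k)
        Np = np.count_nonzero((k >= lo) & (k < hi))
        Nm = np.count_nonzero((kp >= lo) & (kp < hi))
        diffs.append(Np - Nm)
        sums.append(Np + Nm)
    return np.var(diffs) / np.mean(sums)

print("small bin:", squeezing(0.1), " large bin:", squeezing(0.6))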
Abstract:
Conventionally, document classification research focuses on improving the learning capabilities of classifiers. Nevertheless, according to our observation, the effectiveness of classification is limited by the suitability of the document representation. Intuitively, the more features that are used in a representation, the more comprehensively documents are represented. However, if a representation contains too many irrelevant features, the classifier suffers not only from the curse of high dimensionality, but also from overfitting. To address this problem of the suitability of document representations, we present a classifier-independent approach to measuring the effectiveness of document representations. Our approach utilises a labelled document corpus to estimate the distribution of documents in the feature space. By looking at documents in this way, we can clearly identify the contributions made by different features toward document classification. Experiments have been performed to show how the effectiveness is evaluated. Our approach can be used as a tool to assist feature selection, dimensionality reduction and document classification.
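As a rough stand-in for such a classifier-independent measure, per-term mutual information between feature counts and labels scores each feature's contribution to separating the classes (the corpus and labels below are made up):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

docs = ["the rocket engine burned liquid fuel",
        "the spacecraft docked with the station",
        "the court ruled on the contract dispute",
        "the judge dismissed the appeal"]
labels = [0, 0, 1, 1]                          # 0 = space, 1 = legal

vec = CountVectorizer()
X = vec.fit_transform(docs)                    # documents in the feature space

# Mutual information of each term with the class label: high scores mark
# features that matter, near-zero scores mark candidates for removal.
mi = mutual_info_classif(X, labels, discrete_features=True, random_state=0)
for term, score in sorted(zip(vec.get_feature_names_out(), mi),
                          key=lambda p: -p[1])[:5]:
    print(f"{term:10s} {score:.3f}")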
Abstract:
The generation of very short range forecasts of precipitation in the 0-6 h time window is traditionally referred to as nowcasting. Most existing nowcasting systems essentially extrapolate radar observations in some manner; however, very few systems account for the uncertainties involved. Thus deterministic forecasts are produced, which are of limited use when decisions must be made, since they carry no measure of confidence or spread. This paper develops a Bayesian state space modelling framework for quantitative precipitation nowcasting which is probabilistic from conception. The model treats the observations (radar) as noisy realisations of the underlying true precipitation process, recognising that this process can never be completely known and thus must be represented probabilistically. In the model presented here the dynamics of the precipitation are dominated by advection, so this is a probabilistic extrapolation forecast. The model is designed in such a way as to minimise the computational burden, while maintaining a full, joint representation of the probability density function of the precipitation process. The update and evolution equations avoid the need to sample, so only one model need be run, as opposed to the more traditional ensemble route. It is shown that the model works well on both simulated and real data, but that further work is required before the model can be used operationally.
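The point about avoiding sampling is the familiar linear-Gaussian one: with advection as a linear evolution operator and Gaussian noise, the evolution and update equations propagate the mean and covariance in closed form, so a single filter run replaces an ensemble. A 1-D toy sketch (the grid size and noise levels are made up, and this is not the paper's model):

import numpy as np

n = 50
F = np.roll(np.eye(n), 1, axis=0)          # evolution: advect the field by one cell
H = np.eye(n)                              # radar observes every cell, noisily
Q, R = 0.05 * np.eye(n), 0.5 * np.eye(n)   # model and observation noise

def kf_step(m, P, y):
    # Evolution and update in closed form: no sampling anywhere.
    m, P = F @ m, F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return m + K @ (y - H @ m), (np.eye(n) - K @ H) @ P

rng = np.random.default_rng(3)
truth = np.exp(-0.5 * ((np.arange(n) - 10.0) / 3.0) ** 2)   # a rain band
m, P = np.zeros(n), np.eye(n)
for _ in range(20):
    truth = np.roll(truth, 1)                               # true advection
    y = truth + rng.normal(0.0, np.sqrt(0.5), n)            # noisy radar image
    m, P = kf_step(m, P, y)
print("estimated peak near cell", int(np.argmax(m)))        # expect cell 30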
Abstract:
This thesis describes a novel connectionist machine utilizing induction by a Hilbert hypercube representation. This representation offers a number of distinct advantages, which are described. We construct a theoretical and practical learning machine which lies in an area of overlap between three disciplines - neural nets, machine learning and knowledge acquisition - hence it is referred to as a "coalesced" machine. To this unifying aspect is added the various advantages of its orthogonal lattice structure as against less structured nets. We discuss the case for such a fundamental and low-level empirical learning tool, and the assumptions behind the machine are clearly outlined. Our theory of an orthogonal lattice structure, the Hilbert hypercube of an n-dimensional space using a complemented distributive lattice as a basis for supervised learning, is derived from first principles. The resulting "subhypercube theory" was implemented in a development machine, which was then used to test the theoretical predictions, again under strict scientific guidelines. The scope, advantages and limitations of this machine were tested in a series of experiments. Novel and seminal properties of the machine include: the "metrical", deterministic and global nature of its search; complete convergence, invariably producing minimum polynomial solutions for both disjuncts and conjuncts even with moderate levels of noise present; a learning engine which is mathematically analysable in depth, based upon the "complexity range" of the function concerned; a strong bias towards the simplest possible globally (rather than locally) derived "balanced" explanation of the data; the ability to cope with variables in the network; and new ways of reducing the exponential explosion. Performance issues were addressed, and comparative studies with other learning machines indicate that our novel approach has definite value and should be further researched.
Abstract:
This research was conducted at the Space Research and Technology Centre of the European Space Agency at Noordwijk in the Netherlands. ESA is an international organisation that brings together a range of scientists, engineers and managers from 14 European member states. The motivation for the work was to enable decision-makers, in a culturally and technologically diverse organisation, to share information for the purpose of making decisions that are well informed about the risk-related aspects of the situations they seek to address. The research examined the use of decision support system (DSS) technology to facilitate decision-making of this type. This involved identifying the technology available and its application to risk management. Decision-making is a complex activity that does not lend itself to exact measurement or precise understanding at a detailed level. In view of this, a prototype DSS was developed through which to understand the practical issues to be accommodated and to evaluate alternative approaches to supporting decision-making of this type. The problem of measuring the effect upon the quality of decisions has been approached through expert evaluation of the software developed. The practical orientation of this work was informed by a review of the relevant literature in decision-making, risk management, decision support and information technology. Communication and information technology unite the major themes of this work. This allows correlation of the interests of the research with European public policy. The principles of communication were also considered in the topic of information visualisation - this emerging technology exploits flexible modes of human-computer interaction (HCI) to improve the cognition of complex data. Risk management is itself an area characterised by complexity, and risk visualisation is advocated for application in this field of endeavour. The thesis provides recommendations for future work in the fields of decision-making, DSS technology and risk management.
Abstract:
Non-uniform B-spline dictionaries on a compact interval are discussed in the context of sparse signal representation. For each given partition, dictionaries of B-spline functions for the corresponding spline space are built up by dividing the partition into subpartitions and joining together the bases for the concomitant subspaces. The resulting slightly redundant dictionaries are composed of B-spline functions of broader support than those of the B-spline basis for the same space. Such dictionaries are meant to assist in the construction of adaptive sparse signal representations through a combination of stepwise optimal greedy techniques.
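A rough sketch of the construction and its intended use with a stepwise greedy method, here with uniform partitions and plain matching pursuit for simplicity (the paper treats non-uniform partitions; every parameter below is illustrative):

import numpy as np
from scipy.interpolate import BSpline

x = np.linspace(0.0, 1.0, 400)

def bspline_atoms(partition):
    # Cubic B-spline functions on a partition; knots are extended uniformly
    # so every basis element is defined near the ends of the interval.
    h = partition[1] - partition[0]
    t = np.r_[partition[0] - 3 * h + h * np.arange(3), partition,
              partition[-1] + h * np.arange(1, 4)]
    return [BSpline.basis_element(t[i:i + 5], extrapolate=False)(x)
            for i in range(len(t) - 4)]

# Dictionary: the basis for the fine partition joined with broader-support
# splines from a coarser subpartition, then column-normalised.
atoms = bspline_atoms(np.linspace(0, 1, 17)) + bspline_atoms(np.linspace(0, 1, 5))
D = np.nan_to_num(np.array(atoms)).T
D = D / np.linalg.norm(D, axis=0)

# Stepwise greedy selection (plain matching pursuit).
signal = np.sin(6 * np.pi * x) * np.exp(-3 * x)
resid, chosen = signal.copy(), []
for _ in range(8):
    j = int(np.argmax(np.abs(D.T @ resid)))
    chosen.append(j)
    resid = resid - (D[:, j] @ resid) * D[:, j]
print("atoms used:", chosen, " residual norm:", round(float(np.linalg.norm(resid)), 4))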
Abstract:
Web document cluster analysis plays an important role in information retrieval by organizing large amounts of documents into a small number of meaningful clusters. Traditional web document clustering is based on the Vector Space Model (VSM), which takes into account only two-level (document and term) knowledge granularity but ignores the bridging paragraph granularity. However, this two-level granularity may lead to unsatisfactory clustering results with “false correlation”. In order to deal with the problem, a Hierarchical Representation Model with Multi-granularity (HRMM), which consists of a five-layer representation of data and a two-phase clustering process, is proposed based on granular computing and article structure theory. To deal with the zero-valued similarity problem resulting from the sparse term-paragraph matrix, an ontology-based strategy and a tolerance-rough-set based strategy are introduced into HRMM. By using granular computing, structural knowledge hidden in documents can be captured more efficiently and effectively in HRMM, and thus web document clusters of higher quality can be generated. Extensive experiments show that HRMM, HRMM with the tolerance-rough-set strategy, and HRMM with ontology all significantly outperform VSM and a representative non-VSM-based algorithm, WFP, in terms of F-score.
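The value of the bridging paragraph granularity shows up even in a tiny sketch: comparing documents through their best-matching paragraphs instead of one merged document vector suppresses "false correlation" from unrelated paragraphs. Illustrative only; HRMM's five-layer model, ontology and tolerance-rough-set strategies are far richer:

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

doc_a = ["the team trained a neural network on images",   # paragraph 1
         "the stadium hosted the football final"]         # paragraph 2
doc_b = ["a deep neural network can classify images"]

vec = TfidfVectorizer().fit(doc_a + doc_b)
Pa, Pb = vec.transform(doc_a), vec.transform(doc_b)

# Document-level similarity mixes the unrelated paragraph into one vector...
doc_sim = cosine_similarity(np.asarray(Pa.mean(axis=0)),
                            np.asarray(Pb.mean(axis=0)))[0, 0]
# ...while paragraph-level similarity isolates the truly related pair.
par_sim = cosine_similarity(Pa, Pb).max()
print(round(doc_sim, 3), "vs", round(par_sim, 3))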
Abstract:
In this paper, we propose a text mining method called LRD (latent relation discovery), which extends the traditional vector space model of document representation in order to improve information retrieval (IR) on documents and document clustering. Our LRD method extracts terms and entities, such as person, organization, or project names, and discovers relationships between them by taking into account their co-occurrence in textual corpora. Given a target entity, LRD discovers other entities closely related to the target effectively and efficiently. With respect to such relatedness, a measure of relation strength between entities is defined. LRD uses relation strength to enhance the vector space model, and uses the enhanced vector space model for query-based IR on documents and for clustering documents in order to discover complex relationships among terms and entities. Our experiments on a standard dataset for query-based IR show that LRD performs significantly better than the traditional vector space model and five other standard statistical methods for vector expansion.
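A minimal sketch of co-occurrence based relation strength, using pointwise mutual information over document-level co-occurrence as a stand-in for the paper's own measure (the entity names are invented):

from math import log
from itertools import combinations
from collections import Counter

docs = [{"Smith", "OntoWeb", "W3C"},        # entities extracted per document
        {"Smith", "OntoWeb"},
        {"Jones", "W3C"},
        {"Smith", "Jones", "OntoWeb"}]
N = len(docs)

single = Counter(e for d in docs for e in d)
pairs = Counter(frozenset(p) for d in docs for p in combinations(sorted(d), 2))

def strength(a, b):
    # PMI-style relation strength from co-occurrence frequencies.
    p_ab = pairs[frozenset((a, b))] / N
    return 0.0 if p_ab == 0 else log(p_ab / ((single[a] / N) * (single[b] / N)))

print("Smith~OntoWeb:", round(strength("Smith", "OntoWeb"), 3))   # strongly related
print("Smith~W3C:    ", round(strength("Smith", "W3C"), 3))       # weakly related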
Abstract:
We define Picard cycles on each smooth three-sheeted Galois cover C of the Riemann sphere. The moduli space of all these algebraic curves is a nice Shimura surface, namely a symmetric quotient of the projective plane uniformized by the complex two-dimensional unit ball. We show that all Picard cycles on C form a simple orbit of the Picard modular group of Eisenstein numbers. The proof uses a special surface classification in connection with the uniformization of a classical Picard-Fuchs system. It yields an explicit symplectic representation of the braid groups (coloured or not) of four strings.
Abstract:
A novel framework for modelling biomolecular systems at multiple scales in space and time simultaneously is described. The atomistic molecular dynamics representation is smoothly connected with a statistical continuum hydrodynamics description. The system behaves correctly in the limits of pure molecular dynamics and pure hydrodynamics, as well as in the intermediate regimes, when the atoms move partly as atomistic particles and at the same time follow the hydrodynamic flows. The corresponding contributions are controlled by a parameter, defined as an arbitrary function of space and time, thus allowing an effective separation of the atomistic 'core' and the continuum 'environment'. To fill the scale gap between the atomistic and continuum representations, our special-purpose computer for molecular dynamics, MDGRAPE-4, as well as GPU-based computing, were used in developing the framework. These hardware developments also include interactive molecular dynamics simulations that allow intervention in the modelling through force-feedback devices.
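Schematically, the coupling parameter can be pictured as a weight s(x, t) that interpolates a particle's equation of motion between an atomistic force and relaxation toward the local hydrodynamic velocity. A toy 1-D sketch under assumed forms for s, the flow and the force (not the authors' equations):

import numpy as np

def s(x, t):
    # s = 1: pure molecular dynamics 'core'; s = 0: continuum 'environment'.
    return np.clip(1.0 - abs(x) / 5.0, 0.0, 1.0)

def u_hydro(x, t):
    return 0.5 * np.sin(0.3 * x - 0.1 * t)    # prescribed hydrodynamic flow

def f_md(x):
    return -x                                  # toy atomistic (harmonic) force

dt, tau, m = 0.01, 0.5, 1.0
x, v = 2.0, 0.0
for step in range(2000):
    t = step * dt
    w = s(x, t)
    # Blended dynamics: atomistic acceleration weighted by s, relaxation
    # toward the local hydrodynamic velocity weighted by 1 - s.
    a = w * f_md(x) / m + (1.0 - w) * (u_hydro(x, t) - v) / tau
    v += a * dt
    x += v * dt
print("final position, velocity:", round(x, 3), round(v, 3))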
Abstract:
2000 Mathematics Subject Classification: 26A33 (primary), 35S15 (secondary)
Abstract:
2000 Mathematics Subject Classification: 26A33 (primary), 35S15 (secondary)