853 results for Many-fermion systems
Abstract:
We investigate quantum many-body systems where all low-energy states are entangled. As a tool for quantifying such systems, we introduce the concept of the entanglement gap: the difference between the ground-state energy and the minimum energy that a separable (unentangled) state may attain. If the energy of the system lies within the entanglement gap, the state of the system is guaranteed to be entangled. We find Hamiltonians that have the largest possible entanglement gap; for a system consisting of two interacting spin-1/2 subsystems, the Heisenberg antiferromagnet is one such example. We also introduce a related concept, the entanglement-gap temperature: the temperature below which the thermal state is certainly entangled, as witnessed by its energy. We give an example of a bipartite Hamiltonian with an arbitrarily high entanglement-gap temperature for a fixed total energy range. For bipartite spin lattices, we prove a theorem demonstrating that the entanglement gap necessarily decreases as the coordination number increases. We investigate frustrated lattices and quantum phase transitions as physical phenomena that affect the entanglement gap.
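In symbols (our notation, not necessarily the authors'), the quantity defined above is

$$E_{\mathrm{sep}} = \min_{|\psi\rangle \in \mathcal{S}} \langle\psi|H|\psi\rangle, \qquad \Delta E_{\mathrm{ent}} = E_{\mathrm{sep}} - E_0,$$

where $\mathcal{S}$ is the set of separable (product) states and $E_0$ the ground-state energy; any state whose energy falls below $E_{\mathrm{sep}}$ must be entangled.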
Abstract:
We show how to efficiently simulate a quantum many-body system with a tree structure when its entanglement (Schmidt number) is small for any bipartite split along an edge of the tree. As an application, we show that any one-way quantum computation on a tree graph can be efficiently simulated on a classical computer.
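The primitive underlying such simulations is the Schmidt decomposition across a single split; a minimal numpy sketch of reading off the Schmidt number (the tolerance, dimensions, and function name are ours, not the paper's full tree algorithm):

```python
import numpy as np

def schmidt_number(psi, dim_left, dim_right, tol=1e-12):
    """Number of non-negligible Schmidt coefficients of a pure state
    across a fixed bipartition (left x right)."""
    # Reshape the state vector into a dim_left x dim_right matrix;
    # its singular values are the Schmidt coefficients.
    matrix = psi.reshape(dim_left, dim_right)
    singular_values = np.linalg.svd(matrix, compute_uv=False)
    return int(np.sum(singular_values > tol))

# Example: a 2-qubit product state has Schmidt number 1,
# a Bell state has Schmidt number 2.
product = np.kron([1.0, 0.0], [1.0, 0.0])
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(schmidt_number(product, 2, 2))  # -> 1
print(schmidt_number(bell, 2, 2))     # -> 2
```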
Abstract:
The interplay between temperature and q-deformation in the phase transition properties of many-body systems is studied in the particular framework of the collective q-deformed fermionic Lipkin model. It is shown that phase transitions occurring in many-fermion systems described by su(2)_q-like models are strongly influenced by the q-deformation.
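For orientation, su(2)_q-type algebras are usually built on q-numbers; in one common (symmetric) convention, which the paper may or may not follow,

$$[x]_q = \frac{q^{x} - q^{-x}}{q - q^{-1}}, \qquad [J_+, J_-] = [2J_z]_q, \qquad [J_z, J_\pm] = \pm J_\pm,$$

so that $[x]_q \to x$ and the undeformed su(2) algebra is recovered in the limit $q \to 1$.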
Abstract:
Many-body systems of composite hadrons are characterized by processes that involve the simultaneous presence of hadrons and their constituents. We briefly review several methods that have been devised to study such systems and present a novel method that is based on the ideas of mapping between physical and ideal Fock spaces. The method, known as the Fock-Tani representation, was invented years ago in the context of atomic physics problems and was recently extended to hadronic physics. Starting with the Fock-space representation of single-hadron states, a change of representation is implemented by a unitary transformation such that composites are redescribed by elementary Bose and Fermi field operators in an extended Fock space. When the unitary transformation is applied to the microscopic quark Hamiltonian, effective, Hermitian Hamiltonians with a clear physical interpretation are obtained. The use of the method in connection with the linked-cluster formalism to describe short-range correlations and quark deconfinement effects in nuclear matter is discussed. As an application of the method, an effective nucleon-nucleon interaction is derived from a constituent quark model and used to obtain the equation of state of nuclear matter in the Hartree-Fock approximation.
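Schematically, and hedging that conventions differ between presentations, the Fock-Tani unitary has a rotation form along the lines of

$$U = \exp\Big[\frac{\pi}{2} \sum_\alpha \big( m_\alpha^{\dagger} h_\alpha - h_\alpha^{\dagger} m_\alpha \big)\Big],$$

where $h_\alpha$ annihilates a physical composite (hadron) and $m_\alpha$ is the corresponding ideal elementary field operator in the extended Fock space; the effective Hamiltonians mentioned above then arise as $U^{\dagger} H U$.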
Abstract:
Many Enterprise Systems (ES) projects have reported nil or detrimental impacts despite the substantial investment in the system. Given the positive outcomes expected for the organization and its functions from such weighty spending, the effective management of ES-related knowledge has been suggested as a critical success factor in ES implementations. This paper suggests theoretical views supporting the importance of understanding knowledge management for ES success. To explain the complex, dynamic and multifaceted nature of knowledge management, we adopt concepts from Learning Network Theory. We then conceptualize the impact of knowledge management on ES by analyzing five case studies across several industries in India, drawing on the Knowledge-based Theory of the Firm to capture the performance of the system.
Abstract:
In this paper we give an overview of some very recent work, as well as presenting a new approach, on the stochastic simulation of multi-scaled systems involving chemical reactions. In many biological systems (such as genetic regulation and cellular dynamics) there is a mix of small numbers of key regulatory proteins and medium to large numbers of other molecules. In addition, it is important to be able to follow the trajectories of individual molecules by taking proper account of the randomness inherent in such a system. We describe different types of simulation techniques (including the stochastic simulation algorithm, Poisson Runge-Kutta methods and the balanced Euler method) for treating simulations in the three different reaction regimes: slow, medium and fast. We then review some recent techniques for the treatment of coupled slow and fast reactions in stochastic chemical kinetics and present a new approach which couples the three regimes mentioned above. We then apply this approach to a biologically inspired problem involving the expression and activity of the LacZ and LacY proteins in E. coli, and conclude with a discussion of the significance of this work.
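The stochastic simulation algorithm referred to above is Gillespie's direct method; a minimal sketch on a toy birth-death reaction pair (the rates and species are illustrative stand-ins, not the LacZ/LacY model from the paper):

```python
import numpy as np

def gillespie_ssa(x0, stoich, propensity, t_end, rng=None):
    """Gillespie's direct method: exact trajectories of a well-mixed
    chemical system, one reaction event at a time."""
    rng = rng or np.random.default_rng()
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = propensity(x)                        # propensity of each reaction
        a0 = a.sum()
        if a0 <= 0:                              # no reaction can fire
            break
        t += rng.exponential(1.0 / a0)           # time to next event
        j = rng.choice(len(a), p=a / a0)         # which reaction fires
        x += stoich[j]                           # update copy numbers
        times.append(t); states.append(x.copy())
    return np.array(times), np.array(states)

# Toy system: 0 -> P at rate k1, P -> 0 at rate k2 * P.
k1, k2 = 10.0, 0.1
stoich = np.array([[+1], [-1]])
prop = lambda x: np.array([k1, k2 * x[0]])
t, traj = gillespie_ssa([0], stoich, prop, t_end=50.0)
```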
Abstract:
User profiling is the process of constructing user models which represent the personal characteristics and preferences of customers. User profiles play a central role in many recommender systems, which recommend items to users based on those profiles; the items can be any objects the users are interested in, such as documents, web pages, books, movies, etc. In recent years, multidimensional data have been attracting increasing attention from both academia and industry as a way to build better recommender systems, since additional metadata gives algorithms more detail for understanding the interactions between users and items. However, most existing user/item profiling techniques for multidimensional data analyze the data by splitting the multidimensional relations, which loses the information carried by the multidimensionality. In this paper, we propose a user profiling approach using a tensor reduction algorithm, which we show is based on a Tucker2 model. The proposed approach incorporates latent interactions between all dimensions into user profiles, which significantly benefits the quality of neighborhood formation. We further propose to integrate the profiling approach into neighborhood-based collaborative filtering recommender algorithms. Experimental results show significant improvements in recommendation accuracy.
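A Tucker2 model compresses two modes of a third-order tensor and leaves the third intact; a rough numpy sketch via truncated HOSVD (the dimension names, ranks, and choice of which modes to compress are our illustrative assumptions, not the authors' algorithm):

```python
import numpy as np

def tucker2(T, rank_u, rank_i):
    """Tucker2: factor matrices on the first two modes only,
    leaving the third mode uncompressed."""
    n_u, n_i, n_c = T.shape
    # Mode-1 unfolding (users x items*contexts): leading left singular
    # vectors give the user factor matrix U.
    U, _, _ = np.linalg.svd(T.reshape(n_u, -1), full_matrices=False)
    U = U[:, :rank_u]
    # Mode-2 unfolding (items x users*contexts) for the item factors V.
    T2 = np.transpose(T, (1, 0, 2)).reshape(n_i, -1)
    V, _, _ = np.linalg.svd(T2, full_matrices=False)
    V = V[:, :rank_i]
    # Core tensor: project onto the two factor subspaces.
    core = np.einsum('uic,ur,is->rsc', T, U, V)
    return core, U, V

# Toy tensor: 50 users x 40 items x 3 context slices.
T = np.random.rand(50, 40, 3)
core, U, V = tucker2(T, rank_u=5, rank_i=5)
# Row u of U, combined with the core, acts as a latent profile of user u.
```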
Abstract:
To remain competitive, many agricultural systems are now being run along business lines. Systems methodologies are being incorporated, and here evolutionary computation is a valuable tool for identifying more profitable or sustainable solutions. However, agricultural models typically pose some of the more challenging problems for optimisation. This chapter outlines these problems, and then presents a series of three case studies demonstrating how they can be overcome in practice. Firstly, increasingly complex models of Australian livestock enterprises show that evolutionary computation is the only viable optimisation method for these large and difficult problems. On-going research is taking a notably efficient and robust variant, differential evolution, out into real-world systems. Next, models of cropping systems in Australia demonstrate the challenge of dealing with competing objectives, namely maximising farm profit whilst minimising resource degradation. Pareto methods are used to illustrate this trade-off, and these results have proved to be most useful for farm managers in this industry. Finally, land-use planning in the Netherlands demonstrates the size and spatial complexity of real-world problems. Here, GIS-based optimisation techniques are integrated with Pareto methods, producing better solutions which were acceptable to the competing organizations. These three studies all show that evolutionary computation remains the only feasible method for the optimisation of large, complex agricultural problems. An extra benefit is that the resultant population of candidate solutions illustrates trade-offs, and this leads to more informed discussions and better education of the industry decision-makers.
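Differential evolution, the variant singled out in the first case study, fits in a few lines; this is a generic DE/rand/1/bin sketch on a toy objective, not the livestock or cropping models themselves:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=200, rng=None):
    """DE/rand/1/bin: mutate with scaled difference vectors,
    binomial crossover, greedy selection."""
    rng = rng or np.random.default_rng()
    lo, hi = np.array(bounds).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Three distinct partners, none equal to i.
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            # Binomial crossover, forcing at least one mutant gene.
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            if (fc := f(trial)) < cost[i]:       # greedy selection
                pop[i], cost[i] = trial, fc
    best = np.argmin(cost)
    return pop[best], cost[best]

# Toy usage: minimise the sphere function on [-5, 5]^3.
x, fx = differential_evolution(lambda x: float(np.sum(x**2)),
                               bounds=[(-5, 5)] * 3)
```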
Abstract:
A feature common to many adaptive systems for identification and control is the adjustment of gain parameters in a manner ensuring the stability of the overall system. This paper puts forward a principle which assures such a result for arbitrary systems which are linear and time invariant except for the adjustable parameters. The principle only demands that a transfer function be positive real; this transfer function depends on the structure of the system with respect to the parameters. Several examples from adaptive identification, control and observer schemes are given as illustrations of the conceptual simplification provided by the structural principle.
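For reference, the standard frequency-domain form of the condition (a textbook definition, not a formula quoted from the paper): a rational transfer function $G(s)$ is positive real if $G(s)$ is real for real $s$, analytic for $\operatorname{Re}(s) > 0$, and

$$\operatorname{Re}\, G(j\omega) \ge 0 \quad \text{for all } \omega \text{ at which } j\omega \text{ is not a pole of } G.$$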
Abstract:
We present methods for fixed-lag smoothing using Sequential Importance Sampling (SIS) on a discrete non-linear, non-Gaussian state-space system with unknown parameters. Our particular application is in the field of digital communication systems. Each input data point is taken from a finite set of symbols, and we represent the transmission medium as a fixed filter with a finite impulse response (FIR), so a discrete state-space system is formed. Conventional Markov chain Monte Carlo (MCMC) techniques such as the Gibbs sampler are unsuitable for this task because they can only process data in batches, whereas the data arrive sequentially and are naturally processed that way. In addition, many communication systems are interactive, so there is a maximum level of latency that can be tolerated before a symbol is decoded. We demonstrate the method by simulation and compare its performance to existing techniques.
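A stripped-down sketch of the general idea, with the FIR channel and finite symbol alphabet replaced by a toy Gaussian random-walk model so the code stays short; all model parameters below are illustrative assumptions, not the paper's system:

```python
import numpy as np

def sis_fixed_lag(observations, n_particles=500, lag=5,
                  sigma_v=1.0, sigma_w=0.5, rng=None):
    """Bootstrap particle filter with fixed-lag smoothing: at time t,
    re-estimate the state at t - lag using the current particle set."""
    rng = rng or np.random.default_rng()
    T = len(observations)
    history = np.zeros((T, n_particles))   # particle ancestry per time step
    x = rng.normal(0.0, 1.0, n_particles)  # initial particles
    smoothed = np.full(T, np.nan)          # last `lag` entries stay NaN here
    for t, y in enumerate(observations):
        x = x + rng.normal(0.0, sigma_v, n_particles)   # propagate
        logw = -0.5 * ((y - x) / sigma_w) ** 2          # Gaussian likelihood
        w = np.exp(logw - logw.max()); w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w) # resample
        history[: t + 1] = history[: t + 1, idx]        # carry ancestry along
        history[t] = x[idx]
        x = x[idx]
        if t >= lag:                                    # fixed-lag estimate
            smoothed[t - lag] = history[t - lag].mean()
    return smoothed

# Toy run: a random walk observed in Gaussian noise.
rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0, 1, 100))
obs = truth + rng.normal(0, 0.5, 100)
est = sis_fixed_lag(obs, rng=rng)
```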
Abstract:
In the field of embedded systems design, coprocessors play an important role as a component to increase performance. Many embedded systems are built around a small General Purpose Processor (GPP). If the GPP cannot meet the performance requirements for a certain operation, a coprocessor can be included in the design; the GPP can then offload the computationally intensive operation to the coprocessor, increasing the performance of the overall system. A common application of coprocessors is the acceleration of cryptographic algorithms. The work presented in this thesis discusses coprocessor architectures for various cryptographic algorithms that are found in many cryptographic protocols, and analyses their performance on a Field Programmable Gate Array (FPGA) platform. Firstly, the acceleration of Elliptic Curve Cryptography (ECC) algorithms is investigated through instruction set extension of a GPP. The performance of these algorithms in a full hardware implementation is then investigated, and an architecture for the acceleration of the ECC-based digital signature algorithm is developed. Hash functions are also an important component of a cryptographic system. FPGA implementations of recent hash function designs from the SHA-3 competition are discussed, and a fair comparison methodology for hash functions is presented. Many cryptographic protocols involve the generation of random data, for keys or nonces; this requires a True Random Number Generator (TRNG) to be present in the system. Various TRNG designs are discussed and a secure implementation, including post-processing and failure detection, is introduced. Finally, a coprocessor for the acceleration of operations at the protocol level is discussed, where a novel aspect of the design is the secure method in which private-key data is handled.
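The core operation such ECC coprocessors accelerate is scalar multiplication; a toy software sketch over a tiny prime field (real designs use standardised curves, constant-time logic and hardware datapaths, so every parameter here is purely illustrative):

```python
# Affine point arithmetic on y^2 = x^3 + a*x + b over GF(p).
# Toy parameters only -- nothing here is cryptographically sized.
P_MOD, A, B = 97, 2, 3
INF = None  # point at infinity

def point_add(P, Q):
    if P is INF: return Q
    if Q is INF: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF                       # P + (-P) = point at infinity
    if P == Q:                           # tangent slope (doubling)
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:                                # chord slope (addition)
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, P):
    """Left-to-right double-and-add: the operation a coprocessor
    accelerates for ECDSA and ECDH."""
    R = INF
    for bit in bin(k)[2:]:
        R = point_add(R, R)              # double
        if bit == '1':
            R = point_add(R, P)          # add
    return R

# (3, 6) lies on the curve: 6^2 = 36 and 3^3 + 2*3 + 3 = 36 (mod 97).
print(scalar_mult(20, (3, 6)))
```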
Abstract:
We address the presence of nondistillable (bound) entanglement in natural many-body systems. In particular, we consider standard harmonic and spin-1/2 chains at thermal equilibrium, characterized by a few interaction parameters. The existence of bound entanglement is addressed by calculating explicitly the negativity of entanglement for different partitions. This allows us to identify a range of temperatures for which no entanglement can be distilled by means of local operations, despite the system being globally entangled. We discuss how the appearance of bound entanglement can be linked to the entanglement-area laws typical of these systems. Various types of interactions are explored, showing that the presence of bound entanglement is an intrinsic feature of these systems. In the harmonic case, we analytically prove that thermal bound entanglement persists for systems composed of an arbitrary number of particles. Our results strongly suggest the existence of bound entangled states in the macroscopic limit for spin-1/2 systems as well.
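The quantifier used here is the standard negativity,

$$\mathcal{N}(\rho) = \frac{\lVert \rho^{T_A} \rVert_1 - 1}{2},$$

where $\rho^{T_A}$ denotes the partial transpose with respect to one side of the partition. Since states with positive partial transpose admit no distillable entanglement across that cut, a globally entangled thermal state whose negativity vanishes for every partition in some temperature window is bound entangled there.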
Abstract:
Does bound entanglement naturally appear in quantum many-body systems? We address this question by showing the existence of bound-entangled thermal states for harmonic oscillator systems consisting of an arbitrary number of particles. By explicit calculations of the negativity for different partitions, we find a range of temperatures for which no entanglement can be distilled by means of local operations, despite the system being globally entangled. We offer an interpretation of this result in terms of entanglement-area laws, typical of these systems. Finally, we discuss generalizations of this result to other systems, including spin chains.
Abstract:
We consider the ground-state entanglement in highly connected many-body systems consisting of harmonic oscillators and spin-1/2 systems. Varying their degree of connectivity, we investigate the interplay between the enhancement of entanglement due to connections and its frustration due to monogamy constraints. Remarkably, we see that in many situations the degree of entanglement in a highly connected system is essentially of the same order as in a weakly connected one. We also identify instances in which the entanglement decreases as the degree of connectivity increases.
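The monogamy constraints in question are of the Coffman-Kundu-Wootters type; for three qubits (quoted here for orientation, the paper's setting is more general),

$$\tau_{A|B} + \tau_{A|C} \le \tau_{A|(BC)},$$

where $\tau$ is the tangle: the more neighbours a site is connected to, the less entanglement it can share with each of them.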
Abstract:
Many-core platforms are an emerging technology in the real-time embedded domain. These devices offer various options for power savings and cost reductions and contribute to overall system flexibility; however, issues such as unpredictability, scalability and analysis pessimism pose serious challenges to their integration into this domain. The focus of this work is on many-core platforms using a limited migrative model (LMM), an approach based on the fundamental concepts of the multi-kernel paradigm, which is a promising step towards scalable and predictable many-cores. In this work, we formulate the problem of real-time application mapping on a many-core platform using LMM, and propose a three-stage method to solve it. An extended version of the existing analysis is used to ensure that derived mappings (i) guarantee the fulfilment of timing constraints posed on the worst-case communication delays of individual applications, and (ii) provide an environment in which to perform load balancing, e.g. for energy/thermal management, fault tolerance and/or performance reasons.
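Purely to give a flavour of what a mapping stage does, a greedy first-fit heuristic assigning applications to cores under a utilisation bound; this is a generic stand-in sketch, not the paper's three-stage method, and all names and numbers below are hypothetical:

```python
def first_fit_mapping(apps, n_cores, capacity=1.0):
    """Greedy first-fit: place each application (given by its
    utilisation) on the first core whose load stays within bounds.
    A stand-in illustration, not the paper's LMM analysis."""
    load = [0.0] * n_cores
    mapping = {}
    # Heaviest-first ordering tends to balance the load better.
    for app, util in sorted(apps.items(), key=lambda kv: -kv[1]):
        for core in range(n_cores):
            if load[core] + util <= capacity:
                load[core] += util
                mapping[app] = core
                break
        else:
            raise ValueError(f"no feasible core for {app}")
    return mapping, load

# Hypothetical application set with per-application utilisations.
apps = {"ctrl": 0.4, "video": 0.7, "logger": 0.2, "net": 0.5}
mapping, load = first_fit_mapping(apps, n_cores=3)
```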