965 results for Quantum computational complexity
Abstract:
Many important problems in communication, transportation, and logistics networks are solved by minimizing cost functions. In general, these can be complex optimization problems involving many variables. However, physicists have noted that in a network, a node variable (such as the amount of resources at a node) is connected to a set of link variables (such as the flows entering and leaving that node), and similarly each link variable is connected to a small number of (usually two) node variables. This makes it possible to break the problem into local components, often arriving at distributed algorithms to solve it. Compared with centralized algorithms, distributed algorithms have the advantages of lower computational complexity and lower communication overhead. Since they respond faster to local changes in the environment, they are especially useful for networks with evolving conditions. This review covers message-passing algorithms in applications such as resource allocation, transportation networks, facility location, traffic routing, and the stability of power grids.
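The local decomposition described above is easy to see in code. Below is a minimal Python sketch of a min-sum-style distributed shortest-path computation, a toy instance of the traffic-routing application; the graph, costs, and function names are illustrative assumptions, not taken from the review.

```python
def min_sum_shortest_paths(edges, target, n_nodes, n_iters=20):
    """Each node repeatedly updates its cost-to-target from its
    neighbours' messages; only local link information is exchanged."""
    INF = float("inf")
    neighbours = {u: [] for u in range(n_nodes)}
    for u, v, c in edges:                 # undirected links
        neighbours[u].append((v, c))
        neighbours[v].append((u, c))

    cost = [INF] * n_nodes                # each node's current estimate
    cost[target] = 0.0
    for _ in range(n_iters):              # synchronous message rounds
        new_cost = list(cost)
        for u in range(n_nodes):
            if u == target:
                continue
            # message from neighbour v: "my estimate plus the link cost"
            new_cost[u] = min((cost[v] + c for v, c in neighbours[u]),
                              default=INF)
        cost = new_cost
    return cost

# Toy network of 4 nodes routing toward node 3.
edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 5.0), (2, 3, 1.0)]
print(min_sum_shortest_paths(edges, target=3, n_nodes=4))
```

Each update uses only a node's own links, which is exactly what makes the scheme distributable and quick to adapt when a local link cost changes.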
Abstract:
The Dirichlet process mixture model (DPMM) is a ubiquitous, flexible Bayesian nonparametric statistical model. However, full probabilistic inference in this model is analytically intractable, so computationally intensive techniques such as Gibbs sampling are required. As a result, DPMM-based methods, which have considerable potential, are restricted to applications in which computational resources and time for inference are plentiful. For example, they would not be practical for digital signal processing on embedded hardware, where computational resources are at a serious premium. Here, we develop a simplified yet statistically rigorous approximate maximum a posteriori (MAP) inference algorithm for DPMMs. This algorithm is as simple as DP-means clustering and solves the MAP problem as well as Gibbs sampling does, while requiring only a fraction of the computational effort. (For freely available code that implements the MAP-DP algorithm for Gaussian mixtures see http://www.maxlittle.net/.) Unlike related small variance asymptotics (SVA), our method is non-degenerate and so inherits the “rich get richer” property of the Dirichlet process. It also retains a non-degenerate closed-form likelihood, which enables out-of-sample calculations and the use of standard tools such as cross-validation. We illustrate the benefits of our algorithm on a range of examples and contrast it with variational, SVA, and sampling approaches, both from a computational complexity perspective and in terms of clustering performance. We demonstrate the wide applicability of our approach by presenting an approximate MAP inference method for the infinite hidden Markov model, whose performance compares favorably with a recently proposed hybrid SVA approach. Similarly, we show how our algorithm can be applied to a semiparametric mixed-effects regression model in which the random-effects distribution is modelled using an infinite mixture model, as used in longitudinal progression modelling in population health science. Finally, we propose directions for future research on approximate MAP inference in Bayesian nonparametrics.
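For orientation, here is a minimal sketch of DP-means clustering, the SVA-derived baseline the abstract compares MAP-DP to (MAP-DP itself is available at the URL above). The toy data and the penalty parameter `lam` are illustrative assumptions.

```python
import numpy as np

def dp_means(X, lam, n_iters=50):
    """DP-means: the number of clusters grows whenever a point lies
    farther than lam (in squared distance) from every centroid."""
    centroids = [X.mean(axis=0)]              # start with one cluster
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iters):
        # assignment: nearest centroid, or open a new cluster
        for i, x in enumerate(X):
            d2 = [np.sum((x - c) ** 2) for c in centroids]
            k = int(np.argmin(d2))
            if d2[k] > lam:
                centroids.append(x.copy())
                k = len(centroids) - 1
            labels[i] = k
        # update: recompute centroids, dropping empty clusters
        kept = [k for k in range(len(centroids)) if np.any(labels == k)]
        centroids = [X[labels == k].mean(axis=0) for k in kept]
        labels = np.array([kept.index(k) for k in labels])
    return labels, np.array(centroids)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
labels, centroids = dp_means(X, lam=1.0)
print(len(centroids), "clusters found")       # expect 2 on this toy data
```

Unlike MAP-DP, this SVA-style algorithm is degenerate (it keeps no probabilistic model), which is precisely the limitation the abstract's method removes.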
Abstract:
Using Macaulay's correspondence, we study the family of Artinian Gorenstein local algebras with fixed symmetric Hilbert function decomposition. As an application, we give a new lower bound for the dimension of cactus varieties of the third Veronese embedding. We discuss the case of cubic surfaces, where interesting phenomena occur.
Abstract:
In recent years, global supply chains have increasingly suffered from reliability issues due to various external, difficult-to-manage events. This paper aims to build an integrated approach to the design of a Supply Chain under the risk of disruption and demand fluctuation. The study is divided into two parts: a mathematical optimization model, which identifies the optimal design and customer-facility assignments, and a discrete-event simulation of the resulting network. The first part describes a model in which plant location decisions are influenced by variables such as distance to customers, the investment needed to open plants, and centralization phenomena that help contain the risk of demand variability (risk pooling). The entire model is built with a proactive approach to managing disruption risk, assigning to each customer two open facilities: one that serves it under normal conditions and a back-up facility that comes into operation when the main facility has failed. The study is conducted on a relatively small number of instances due to the computational complexity; a matheuristic approach for evaluating the problem with a larger set of players can be found in Part A of the paper. Once the network is built, a discrete-event Supply Chain simulation (SCS) is implemented to analyze the stock flow within the facilities' warehouses, the actual impact of disruptions, and the role of the back-up facilities, whose inventories come under great stress from the surge in demand caused by disruptions. The simulation therefore follows a reactive approach, in which customers are redistributed among facilities according to the interruptions that occur in the system and the assignments derived from the design model. Lastly, the most important results of the study are reported, analyzing the role of lead time in a reactive approach to disruptions and comparing the two models in terms of cost.
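As a concrete, much-simplified illustration of the primary/back-up assignment idea, the sketch below greedily gives each customer its cheapest open facility as primary and the cheapest different one as back-up. The cost matrix and the set of open facilities are assumed toy inputs; the paper itself obtains both from an optimization model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_customers, n_facilities = 6, 4
cost = rng.uniform(1, 10, (n_customers, n_facilities))  # e.g. distances
open_facilities = [0, 1, 3]                   # assumed design decision

assignments = []
for i in range(n_customers):
    ranked = sorted(open_facilities, key=lambda j: cost[i, j])
    primary, backup = ranked[0], ranked[1]    # needs >= 2 open facilities
    assignments.append((primary, backup))

for i, (p, b) in enumerate(assignments):
    print(f"customer {i}: primary facility {p}, back-up facility {b}")
```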
Abstract:
A computational model of observation in quantum mechanics is presented. The model provides a clean and simple computational paradigm that can be used to illustrate, and possibly explain, some of the counterintuitive and unexpected behavior of quantum mechanical systems. As examples, the model is used to simulate three seminal quantum mechanical experiments. The results obtained agree with the predictions of quantum mechanics (and physical measurements), yet the model is perfectly deterministic and maintains a notion of locality.
Abstract:
In past decades, efforts to quantify system complexity with a general tool have usually relied on Shannon's classical information framework, addressing the disorder of the system through the Boltzmann-Gibbs-Shannon entropy or one of its extensions. In recent years, however, there have been attempts to quantify algorithmic complexity in quantum systems based on Kolmogorov algorithmic complexity, obtaining results that disagree with the classical approach. Here, therefore, a complexity measure is proposed using the quantum information formalism; it takes advantage of the generality of the classical-based complexities and expresses the complexity of these systems in a framework other than the algorithmic one. To do so, the Shiner-Davison-Landsberg (SDL) complexity framework is considered jointly with the linear entropy of the density operators representing the analyzed systems, along with the tangle as the entanglement measure. The proposed measure is then applied to a family of maximally entangled mixed states.
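A minimal numerical sketch of the ingredients named above: the normalized linear entropy as the disorder measure and the simplest SDL form Γ = Δ(1 − Δ). The Bell-state-plus-white-noise family used below is an assumed toy example, and the tangle computation is omitted for brevity.

```python
import numpy as np

def linear_entropy(rho):
    """Normalized linear entropy S_L = d/(d-1) * (1 - Tr(rho^2)):
    0 for a pure state, 1 for the maximally mixed state."""
    d = rho.shape[0]
    purity = np.trace(rho @ rho).real
    return d / (d - 1) * (1 - purity)

def sdl_complexity(rho):
    """Simplest SDL form Gamma = Delta * (1 - Delta), taking the
    disorder Delta to be the normalized linear entropy."""
    delta = linear_entropy(rho)
    return delta * (1 - delta)

# Family mixing a Bell state with white noise:
# rho(p) = p |Phi+><Phi+| + (1 - p) I/4
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
for p in (0.0, 0.5, 1.0):
    rho = p * np.outer(phi, phi) + (1 - p) * np.eye(4) / 4
    print(f"p={p:.1f}  S_L={linear_entropy(rho):.3f}  "
          f"Gamma={sdl_complexity(rho):.3f}")
```

As expected for an SDL-type measure, the complexity vanishes at both extremes (pure and maximally mixed) and peaks in between.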
Abstract:
We give a simple proof of a formula for the minimal time required to simulate a two-qubit unitary operation using a fixed two-qubit Hamiltonian together with fast local unitaries. We also note that a related lower bound holds for arbitrary n-qubit gates.
Abstract:
Novel molecular complexity measures are designed based on quantum molecular kinematics. The Hamiltonian matrix, constructed in a quasi-topological approximation, describes the temporal evolution of the modelled electronic system and determines the time derivatives of the dynamic quantities. This makes it possible to define average quantum kinematic characteristics closely related to the curvatures of the electron paths, particularly the torsion, which reflects the chirality of the dynamic system. Special attention is given to the computational scheme for this chirality measure. Calculations on realistic molecular systems demonstrate reasonable behaviour of the proposed molecular complexity indices.
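The abstract does not specify the quasi-topological Hamiltonian, so the sketch below only illustrates the generic mechanism it relies on: obtaining the time derivative of a dynamic quantity from a Hamiltonian matrix via the Heisenberg equation of motion, dA/dt = (i/ħ)[H, A], here on an assumed Hückel-type chain.

```python
import numpy as np

hbar = 1.0                              # natural units
# Assumed Hückel-type Hamiltonian for a 4-site chain (alpha=0, beta=-1),
# standing in for the paper's quasi-topological matrix.
H = np.zeros((4, 4))
for i in range(3):
    H[i, i + 1] = H[i + 1, i] = -1.0

X = np.diag(np.arange(4, dtype=float))  # site-index ("position") operator

def heisenberg_derivative(H, A):
    """dA/dt = (i/hbar) [H, A]: the operator whose expectation values
    give the average kinematic quantities."""
    return 1j / hbar * (H @ A - A @ H)

Xdot = heisenberg_derivative(H, X)        # a "velocity" operator
Xddot = heisenberg_derivative(H, Xdot)    # an "acceleration" operator
psi = np.array([1, 1j, 0, 0]) / np.sqrt(2)    # some normalized state
print("<dX/dt>   =", (psi.conj() @ Xdot @ psi).real)
print("<d2X/dt2> =", (psi.conj() @ Xddot @ psi).real)
```

Higher derivatives built this way are the raw material for path-curvature quantities such as the torsion mentioned above.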
Abstract:
The mapping, exact or approximate, of a many-body problem onto an effective single-body problem is one of the most widely used conceptual and computational tools of physics. Here, we propose and investigate the inverse map of effective approximate single-particle equations onto the corresponding many-particle system. This approach allows us to understand which interacting system a given single-particle approximation is actually describing, and how far this is from the original physical many-body system. We illustrate the resulting reverse engineering process by means of the Kohn-Sham equations of density-functional theory. In this application, our procedure sheds light on the nonlocality of the density-potential mapping of density-functional theory, and on the self-interaction error inherent in approximate density functionals.
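The one-particle case gives a compact illustration of the density-potential mapping being probed here: a ground-state density determines its potential explicitly via v(x) = E + (1/2)(√n)''/√n. The sketch below builds a density from a known 1D potential, then recovers the potential from the density alone. The grid and the harmonic-well target are assumed toy choices, not the authors' procedure.

```python
import numpy as np

N, L = 200, 10.0
dx = L / (N + 1)
x = np.linspace(dx, L - dx, N)

def ground_density(v):
    """Ground-state density of H = -1/2 d^2/dx^2 + v(x) on the grid."""
    H = (np.diag(1.0 / dx**2 + v)
         + np.diag(-0.5 / dx**2 * np.ones(N - 1), 1)
         + np.diag(-0.5 / dx**2 * np.ones(N - 1), -1))
    _, U = np.linalg.eigh(H)
    psi = U[:, 0] / np.sqrt(dx)          # normalized so sum(n)*dx = 1
    return psi**2

v_true = 0.5 * (x - L / 2)**2            # harmonic well
n = ground_density(v_true)

# Reverse engineer the potential from the density alone.
psi = np.sqrt(n)
lap = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx**2
v_rec = 0.5 * lap / psi                  # equals v_true - E_0 pointwise
v_rec += v_true[N // 2] - v_rec[N // 2]  # fix the arbitrary constant

mid = slice(N // 4, 3 * N // 4)          # avoid noisy low-density edges
print("max |v_rec - v_true| in the well:",
      np.abs(v_rec[mid] - v_true[mid]).max())
```

For many interacting particles no such closed form exists, which is what makes the inverse map studied in the paper nontrivial.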
Abstract:
Time-averaged conformations of (±)-1-[3,4-(methylenedioxy)phenyl]-2-methylaminopropane hydrochloride (MDMA, "ecstasy") in D₂O, and of its free base and trifluoroacetate in CDCl₃, were deduced from their ¹H NMR spectra and used to calculate their conformer distribution. Their rotational potential energy surface (PES) was calculated at the RHF/6-31G(d,p), B3LYP/6-31G(d,p), B3LYP/cc-pVDZ and AM1 levels. Solvent effects were evaluated using the polarizable continuum model. The NMR and theoretical studies showed that, in the free base, the N-methyl group and the ring are preferentially trans. This preference is stronger in the salts and corresponds to the X-ray structure of the hydrochloride. However, the energy barriers separating these forms are very low. The X-ray diffraction crystal structures of the anhydrous salt and its monohydrate differed mainly in the trans or cis relationship of the N-methyl group to the α-methyl, although these two forms interconvert freely in solution. (C) 2007 Elsevier Inc. All rights reserved.
Abstract:
In this and a preceding paper, we provide an introduction to the Fujitsu VPP range of vector-parallel supercomputers and to some of the computational chemistry software available for the VPP. Here, we consider the implementation and performance of seven popular chemistry application packages. The codes discussed range from classical molecular dynamics to semiempirical and ab initio quantum chemistry. All have evolved from sequential codes, and have typically been parallelised using a replicated-data approach. As such, they are well suited to the large-memory/fast-processor architecture of the VPP. For one code, CASTEP, a distributed-memory, data-driven parallelisation scheme is presented. (C) 2000 Published by Elsevier Science B.V. All rights reserved.
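The replicated-data strategy mentioned above is easy to sketch. In the toy mpi4py example below (an assumed stand-in for the Fortran/MPI production codes), every rank stores all particle positions, computes only its round-robin slice of the pair forces, and a global reduction replicates the full force array on every rank, which keeps memory usage high but communication simple.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 64
rng = np.random.default_rng(42)          # same seed => replicated data
pos = rng.uniform(0, 10, (n, 3))         # every rank has all positions

local_f = np.zeros_like(pos)
for i in range(rank, n, size):           # this rank's slice of atoms
    d = pos - pos[i]
    r2 = (d**2).sum(axis=1)
    r2[i] = np.inf                       # skip the self-interaction
    local_f[i] = (d / r2[:, None]**1.5).sum(axis=0)  # toy 1/r^2 force

forces = np.empty_like(local_f)
comm.Allreduce(local_f, forces, op=MPI.SUM)   # replicate full result
if rank == 0:
    print("total |F| =", np.linalg.norm(forces))
```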
Abstract:
The main problem with current approaches to quantum computing is the difficulty of establishing and maintaining entanglement. A Topological Quantum Computer (TQC) aims to overcome this by using different physical processes that are topological in nature and thus less susceptible to disturbance by the environment. In a (2+1)-dimensional system, pseudoparticles called anyons have statistics that fall somewhere between bosons and fermions. The exchange of two anyons, an effect called braiding from knot theory, can occur in two different ways. The quantum states corresponding to the two elementary braids constitute a two-state system, allowing the definition of a computational basis. Quantum gates can be built up from patterns of braids, and for quantum computing it is essential that the operator describing the braiding (the R-matrix) be unitary. The physics of anyonic systems is governed by quantum groups, in particular the quasi-triangular Hopf algebras obtained from finite groups by application of the Drinfeld quantum double construction. Their representation theory has been described in detail by Gould and Tsohantjis, and in this review article we relate the work of Gould to TQC schemes, particularly that of Kauffman.
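To make the unitarity requirement concrete, the sketch below checks it in the Fibonacci anyon model, a simpler, standard anyon theory than the quantum-double models discussed in this review (the matrix conventions are the commonly used ones and are assumptions here). The two braid generators act on the two-dimensional fusion space of three anyons.

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2                       # golden ratio
F = np.array([[1 / phi, 1 / np.sqrt(phi)],
              [1 / np.sqrt(phi), -1 / phi]])     # F is its own inverse
R = np.diag([np.exp(-4j * np.pi / 5),            # braiding phases
             np.exp(3j * np.pi / 5)])

sigma1 = R                                       # braid generators
sigma2 = F @ R @ F                               # R in the other basis

for name, s in (("sigma1", sigma1), ("sigma2", sigma2)):
    print(name, "unitary:", np.allclose(s @ s.conj().T, np.eye(2)))
# Braid (Yang-Baxter) relation: s1 s2 s1 == s2 s1 s2
print("braid relation holds:",
      np.allclose(sigma1 @ sigma2 @ sigma1, sigma2 @ sigma1 @ sigma2))
```

Because both generators are unitary, any word in them, i.e. any braid pattern, is a legitimate quantum gate.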
Abstract:
The brain is a complex system that, in the normal condition, has emergent properties like those associated with activity-dependent plasticity in learning and memory, and that, in pathological situations, manifests abnormal long-term phenomena like the epilepsies. Data from our laboratory and from the literature were classified qualitatively as sources of complexity and emergent properties, from the behavioral to the electrophysiological, cellular, molecular, and computational levels. We used such models as brainstem-dependent acute audiogenic seizures and forebrain-dependent kindled audiogenic seizures. Additionally, we used chemical or electrical experimental models of temporal lobe epilepsy that induce status epilepticus, with behavioral, anatomical, and molecular sequelae such as spontaneous recurrent seizures and long-term plastic changes. Current computational neuroscience tools will help the interpretation, storage, and sharing of the exponentially growing body of information derived from those studies. These strategies are considered solutions for dealing with the complexity of brain pathologies such as the epilepsies. (C) 2008 Elsevier Inc. All rights reserved.
Abstract:
Intervalley interference between degenerate conduction-band minima has been shown to lead to oscillations in the exchange energy between neighboring phosphorus donor electron states in silicon [B. Koiller, X. Hu, and S. Das Sarma, Phys. Rev. Lett. 88, 027903 (2002); Phys. Rev. B 66, 115201 (2002)]. These same effects lead to an extreme sensitivity of the exchange energy to the relative orientation of the donor atoms, an issue of crucial importance in the construction of silicon-based spin quantum computers. In this article we calculate the donor electron exchange coupling as a function of donor position, incorporating the full Bloch structure of the Kohn-Luttinger electron wave functions. It is found that, due to the rapidly oscillating nature of the terms they produce, the periodic part of the Bloch functions can be safely ignored in the Heitler-London integrals, as was done by Koiller, Hu, and Das Sarma, significantly reducing the complexity of the calculations. We address issues of fabrication and calculate the expected exchange coupling between neighboring donors that have been implanted into the silicon substrate using a 15 keV ion beam in the so-called top-down fabrication scheme for a Kane solid-state quantum computer. In addition, we calculate the exchange coupling as a function of the voltage bias on the control gates used to manipulate the electron wave functions and implement quantum logic operations in the Kane proposal, and find that these gate biases can be used to both increase and decrease the magnitude of the exchange coupling between neighboring donor electrons. The zero-bias results reconfirm those previously obtained by Koiller, Hu, and Das Sarma.
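The valley-interference mechanism can be caricatured in a few lines: a smooth hydrogenic envelope for the exchange, multiplied by an interference factor from the six conduction-band valleys. The envelope form (a Herring-Flicker-type asymptotic) and all parameter values below are rough assumptions for illustration only; the paper performs the full Heitler-London calculation.

```python
import numpy as np

a = 0.543                  # Si lattice constant (nm)
aB = 1.8                   # assumed effective Bohr radius (nm)
k0 = 0.85 * 2 * np.pi / a  # valley-minimum wave vector
valleys = k0 * np.array([[1, 0, 0], [-1, 0, 0],
                         [0, 1, 0], [0, -1, 0],
                         [0, 0, 1], [0, 0, -1]])

def J(Rvec):
    """Toy exchange: smooth envelope times valley-interference factor."""
    R = np.linalg.norm(Rvec)
    envelope = (R / aB) ** 2.5 * np.exp(-2 * R / aB)
    interference = np.mean(np.cos(valleys @ Rvec)) ** 2
    return envelope * interference

# Move the second donor along [100] one lattice site at a time:
for m in range(18, 25):
    print(f"R = {m:2d}a   J ~ {J(np.array([m * a, 0.0, 0.0])):.3e}")
```

Even single-lattice-site displacements swing the interference factor strongly, which is the orientation sensitivity the article addresses.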
Abstract:
Power system planning, control, and operation require adequate use of existing resources so as to increase system efficiency. The use of optimal solutions in power systems allows huge savings, stressing the need for adequate optimization and control methods. These must be able to solve the envisaged optimization problems in time scales compatible with operational requirements. Power systems are complex, uncertain, and changing environments that make the use of traditional optimization methodologies impracticable in most real situations. Computational intelligence methods present good characteristics for addressing this kind of problem and have already proved efficient for very diverse power system optimization problems. Evolutionary computation, fuzzy systems, swarm intelligence, artificial immune systems, neural networks, and hybrid approaches are presently seen as the most adequate methodologies for addressing several planning, control, and operation problems in power systems. Future power systems, with their intensive use of distributed generation and electricity market liberalization, increase power system complexity and bring huge challenges to the forefront of the power industry. Decentralized intelligence and decision making require more effective optimization and control techniques so that the involved players can make the most adequate use of existing resources in the new context. This chapter presents the application of computational intelligence methods to several problems of future power systems. Four different applications are presented to illustrate the promise and potential of computational intelligence.
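A minimal sketch of one of the techniques named above, particle swarm optimization, on a toy economic-dispatch problem: choose three generator outputs minimizing a quadratic fuel cost while meeting demand through a penalty term. All cost coefficients, limits, and the demand figure are illustrative assumptions, not drawn from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([0.010, 0.012, 0.008])   # quadratic cost coefficients
b = np.array([2.0, 1.8, 2.2])         # linear cost coefficients
lo, hi = 10.0, 100.0                  # per-generator output limits (MW)
demand = 180.0                        # total load to meet (MW)

def cost(P):
    """Fuel cost plus a stiff power-balance penalty."""
    return (a * P**2 + b * P).sum(axis=-1) \
        + 1e3 * (P.sum(axis=-1) - demand) ** 2

n_particles, n_iters = 40, 300
P = rng.uniform(lo, hi, (n_particles, 3))        # particle positions
V = np.zeros_like(P)                             # particle velocities
pbest, pbest_cost = P.copy(), cost(P)
g = pbest[pbest_cost.argmin()].copy()            # global best so far

for _ in range(n_iters):
    r1, r2 = rng.random(P.shape), rng.random(P.shape)
    V = 0.7 * V + 1.5 * r1 * (pbest - P) + 1.5 * r2 * (g - P)
    P = np.clip(P + V, lo, hi)                   # respect output limits
    c = cost(P)
    better = c < pbest_cost
    pbest[better], pbest_cost[better] = P[better], c[better]
    g = pbest[pbest_cost.argmin()].copy()

print("dispatch (MW):", np.round(g, 1), "  total:", round(g.sum(), 1))
```

The same swarm loop carries over unchanged to harder, non-convex dispatch variants where traditional methods struggle, which is part of the appeal the chapter describes.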