993 results for Quantum Space Complexity
Abstract:
In this paper we extend recent results of Fiorini et al. on the extension complexity of the cut polytope and related polyhedra. We first describe a lifting argument that shows exponential extension complexity for a number of NP-complete problems, including subset-sum and three-dimensional matching. We then obtain a relationship between the extension complexity of the cut polytope of a graph and that of its graph minors. Using this, we show exponential extension complexity for the cut polytope of a large number of graphs, including those used in quantum information and suspensions of cubic planar graphs.
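For orientation, the central quantity has a standard definition (added here as background, not taken from the paper): the extension complexity of a polytope is the minimum number of facets among all higher-dimensional polyhedra that project onto it,

```latex
\operatorname{xc}(P) \;=\; \min \bigl\{\, \#\mathrm{facets}(Q) \;:\; Q \text{ a polyhedron with } \pi(Q) = P \text{ for some affine map } \pi \,\bigr\},
```

so "exponential extension complexity" means that every such lifted description of the polytope requires exponentially many inequalities.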
Abstract:
A detailed description of the low-energy dynamics of multipartite entanglement is provided for harmonic systems in a wide variety of dissipative scenarios. Without making any central approximation, this description rests mainly on a reasonable set of hypotheses about the environment and the environment-system interaction, both consistent with a linear analysis of the dissipative dynamics. In the first part, an inseparability criterion is derived that is capable of detecting the k-partite entanglement of a broad class of Gaussian and non-Gaussian states in continuous-variable systems. This criterion is used to monitor the transient dynamics of entanglement, showing that non-Gaussian states can be as robust against dissipative effects as Gaussian ones. Special attention is devoted to the stationary dynamics of entanglement between three oscillators interacting with the same environment or with different environments at different temperatures. This study helps to elucidate the role of quantum correlations in the behavior of energy currents.
Abstract:
In this work we consider several instances of the following problem: "how complicated can the isomorphism relation for countable models be?" Using the Borel reducibility framework, we investigate this question with regard to the space of countable models of particular complete first-order theories. We also investigate to what extent this complexity is mirrored in the number of back-and-forth inequivalent models of the theory. We consider this question for two large and related classes of theories. First, we consider o-minimal theories, showing that if T is o-minimal, then the isomorphism relation is either Borel complete or Borel. Further, if it is Borel, we characterize exactly which values can occur, and when they occur. In all cases, Borel completeness implies lambda-Borel completeness for all lambda. Second, we consider colored linear orders, which are (complete theories of) linear orders expanded by countably many unary predicates. We discover the same characterization as for o-minimal theories, taking the same values, except that all finite values are possible except two. We characterize exactly when each possibility occurs, similarly to the o-minimal case. Additionally, we extend Schirrman's theorem, showing that if the language is finite, then T is countably categorical or Borel complete. As before, in all cases Borel completeness implies lambda-Borel completeness for all lambda.
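As standard background (our addition, not part of the abstract), Borel reducibility orders equivalence relations E on X and F on Y by

```latex
E \le_B F \quad\Longleftrightarrow\quad \exists\, f : X \to Y \text{ Borel such that } x \mathrel{E} y \iff f(x) \mathrel{F} f(y),
```

and an isomorphism relation is Borel complete when the isomorphism relation of every class of countable structures reduces to it.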
Abstract:
We construct parent Hamiltonians involving only local 2-body interactions for a broad class of projected entangled pair states (PEPS). Making use of perturbation-gadget techniques, we define a perturbative Hamiltonian acting on the virtual PEPS space whose finite-order low-energy effective Hamiltonian is a gapped, frustration-free parent Hamiltonian for an encoded version of the desired PEPS. For topologically ordered PEPS, the ground space of the low-energy effective Hamiltonian is shown to be in the same phase as the desired state to all orders of perturbation theory. An encoded parent Hamiltonian for the double-semion string-net ground state is explicitly constructed as a concrete example.
Decoherence models for discrete-time quantum walks and their application to neutral atom experiments
Abstract:
We discuss decoherence in discrete-time quantum walks in terms of a phenomenological model that distinguishes spin and spatial decoherence. We identify the dominant mechanisms that affect quantum-walk experiments realized with neutral atoms walking in an optical lattice. From the measured spatial distributions, we determine with good precision the amount of decoherence per step, which provides a quantitative indication of the quality of our quantum walks. In particular, we find that spin decoherence is the main mechanism responsible for the loss of coherence in our experiment. We also find that the sole observation of ballistic (rather than diffusive) expansion in position space is not a good indicator of the range of coherent delocalization. We provide further physical insight by distinguishing the effects of short- and long-time spin dephasing mechanisms. We introduce the concept of coherence length in the discrete-time quantum walk, which quantifies the range of spatial coherences. Unexpectedly, we find that quasi-stationary dephasing does not modify the local properties of the quantum walk, but instead affects the spatial coherences. For a visual representation of decoherence phenomena in phase space, we have developed a formalism based on a discrete analogue of the Wigner function. We show that the effects of spin and spatial decoherence differ dramatically in momentum space.
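The phenomenological model and the experimental parameters are specific to the paper; purely as an illustration of the qualitative effect described above, here is a minimal Python sketch of a coined (Hadamard) walk on a line with per-step spin dephasing, unravelled as quantum trajectories. The Hadamard coin, the kick probability p, and the trajectory-averaging scheme are our assumptions, not the authors' model.

```python
# Minimal sketch (not the authors' model): Hadamard walk on a line with
# phenomenological spin (coin) dephasing. Each step, with probability p,
# a sigma_z kick hits the coin; averaging trajectories realizes the channel
# rho -> (1-p) rho + p Z rho Z, which pushes ballistic spreading toward
# diffusive spreading as p grows.
import numpy as np

def walk(steps=50, p=0.1, trajectories=200, seed=0):
    rng = np.random.default_rng(seed)
    n = 2 * steps + 1                              # lattice sites
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin
    dist = np.zeros(n)
    for _ in range(trajectories):
        psi = np.zeros((n, 2), complex)            # [site, coin] amplitudes
        psi[steps] = np.array([1, 1j]) / np.sqrt(2)  # symmetric initial coin
        for _ in range(steps):
            psi = psi @ H.T                        # toss the coin
            shifted = np.zeros_like(psi)
            shifted[1:, 0] = psi[:-1, 0]           # coin "up" steps right
            shifted[:-1, 1] = psi[1:, 1]           # coin "down" steps left
            psi = shifted
            if rng.random() < p:                   # spin-dephasing event
                psi[:, 1] *= -1                    # sigma_z on the coin
        dist += (np.abs(psi) ** 2).sum(axis=1)
    return dist / trajectories

prob = walk()
x = np.arange(-50, 51)
print("position std dev:", np.sqrt((prob * x ** 2).sum()))
```

Comparing the printed spread for p = 0 (roughly linear growth in the number of steps) against p close to 1 (roughly the square-root growth of a classical random walk) reproduces the ballistic-versus-diffusive distinction the abstract refers to.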
Abstract:
Finding equilibration times is a major unsolved problem in physics, with few analytical results. Here we look at equilibration times for quantum gases of bosons and fermions in the regime of negligibly weak interactions, a setting which not only includes paradigmatic systems such as gases confined to boxes, but also Luttinger liquids and the free superfluid Hubbard model. To do this, we focus on two classes of measurements: (i) coarse-grained observables, such as the number of particles in a region of space, and (ii) few-mode measurements, such as phase correlators. We show that, in this setting, equilibration occurs quite generally despite the fact that the particles are not interacting. Furthermore, for coarse-grained measurements the timescale is generally at most polynomial in the number of particles N, which is much faster than previous general upper bounds, which were exponential in N. For local measurements on lattice systems, the timescale is typically linear in the number of lattice sites. In fact, for one-dimensional lattices, the scaling is generally linear in the length of the lattice, which is optimal. Additionally, we look at a few specific examples, one of which consists of N fermions initially confined on one side of a partition in a box. The partition is removed and the fermions equilibrate extremely quickly, in a time O(1/N).
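For context, "equilibration" here is usually meant in the standard on-average sense (a common formalization, added by us rather than quoted from the paper): an observable A has equilibrated by time T when its fluctuations about the infinite-time average (dephased) state are small,

```latex
\frac{1}{T}\int_0^T \Bigl|\operatorname{Tr}\bigl[A\,\rho(t)\bigr] - \operatorname{Tr}\bigl[A\,\omega\bigr]\Bigr|^2 dt \;\ll\; \lVert A \rVert^2,
\qquad
\omega = \lim_{T\to\infty}\frac{1}{T}\int_0^T \rho(t)\,dt,
```

and the equilibration time is the smallest such T.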
Abstract:
The performance, energy-efficiency and cost improvements due to traditional technology scaling have begun to slow down and present diminishing returns. Underlying reasons for this trend include fundamental physical limits of transistor scaling, the growing significance of quantum effects as transistors shrink, and a growing mismatch between transistors and interconnects regarding size, speed and power. Continued Moore's Law scaling will not come from technology scaling alone, and must involve improvements to design tools and the development of new disruptive technologies such as 3D integration. 3D integration offers potential improvements to interconnect power and delay by translating the routing problem into a third dimension, and facilitates transistor density scaling independent of technology node. Furthermore, 3D IC technology opens up a new architectural design space of heterogeneously-integrated high-bandwidth CPUs. Vertical integration promises to provide the CPU architectures of the future by integrating high-performance processors with on-chip high-bandwidth memory systems and highly connected network-on-chip structures. Such techniques can overcome the well-known CPU performance bottlenecks referred to as the memory wall and the communication wall. However, the promising improvements to performance and energy efficiency offered by 3D CPUs do not come without cost, both in the financial investment needed to develop the technology and in the increased complexity of design. The two main limitations of 3D IC technology have been heat removal and TSV reliability. Transistor stacking increases power density, current density and thermal resistance in air-cooled packages. Furthermore, the technology introduces vertical through-silicon vias (TSVs) that create new points of failure in the chip and require the development of new BEOL technologies. Although these issues can be controlled to some extent using thermal- and reliability-aware physical and architectural 3D design techniques, high-performance embedded cooling schemes, such as micro-fluidic (MF) cooling, are fundamentally necessary to unlock the true potential of 3D ICs. A new paradigm is being put forth which integrates the computational, electrical, physical, thermal and reliability views of a system. The unification of these diverse aspects of integrated circuits is called Co-Design. Independent design and optimization of each aspect leads to sub-optimal designs due to a lack of understanding of cross-domain interactions and their impact on the feasibility region of the architectural design space. Co-Design enables optimization across layers with a multi-domain view and thus unlocks new high-performance and energy-efficient configurations. Although the Co-Design paradigm is becoming increasingly necessary in all fields of IC design, it is even more critical in 3D ICs where, as we show, the inter-layer coupling and higher degree of connectivity between components exacerbate the interdependence between architectural parameters, physical design parameters and the multitude of metrics of interest to the designer (i.e., power, performance, temperature and reliability). In this dissertation we present a framework for multi-domain co-simulation and co-optimization of 3D CPU architectures with both air and MF cooling solutions. Finally, we propose an approach for design space exploration and modeling within the new Co-Design paradigm, and discuss possible avenues for future improvement of this work.
Abstract:
While fault-tolerant quantum computation might still be years away, analog quantum simulators offer a way to leverage current quantum technologies to study classically intractable quantum systems. Cutting-edge quantum simulators such as those utilizing ultracold atoms are beginning to study physics that surpasses what is classically tractable. As the system sizes of these quantum simulators increase, there are also concurrent gains in the complexity and types of Hamiltonians that can be simulated. In this work, I describe advances toward the realization of an adaptable, tunable quantum simulator capable of surpassing classical computation. We simulate long-ranged Ising and XY spin models, which can have global arbitrary transverse and longitudinal fields in addition to individual transverse fields, using a linear chain of up to 24 171Yb+ ions confined in a linear rf Paul trap. Each qubit is encoded in the ground-state hyperfine levels of an ion. Spin-spin interactions are engineered by the application of spin-dependent forces from laser fields, coupling spin to motion. Each spin can be read out independently using state-dependent fluorescence. The results here add yet more tools to an ever-growing quantum simulation toolbox. One of many challenges has been the coherent manipulation of individual qubits. By using a surprisingly large fourth-order Stark shift in a clock-state qubit, we demonstrate an ability to individually manipulate spins and apply independent Hamiltonian terms, greatly increasing the range of quantum simulations that can be implemented. As quantum systems grow beyond the capability of classical numerics, a constant question is how to verify a quantum simulation. Here, I present measurements which may provide useful metrics for large system sizes and demonstrate them in a system of up to 24 ions during a classically intractable simulation. The observed values are consistent with extremely large entangled states, with as much as ~95% of the system entangled. Finally, we use many of these techniques to generate a spin Hamiltonian which fails to thermalize during experimental time scales due to a meta-stable state, often called prethermal. The observed prethermal state is a new form of prethermalization, arising from long-range interactions and open boundary conditions, which persists even in the thermodynamic limit. This prethermalization is observed in a system of up to 22 spins. We expect that system sizes can be extended up to 30 spins with only minor upgrades to the current apparatus. These results emphasize that, as the technology improves, the techniques and tools developed here can potentially be used to perform simulations which will surpass the capability of even the most sophisticated classical techniques, enabling the study of a whole new regime of quantum many-body physics.
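For reference, the simulated models are of the long-range transverse-field Ising type generic to such trapped-ion experiments; a representative form (conventions and the power-law range are our assumptions, not quoted from the thesis) is

```latex
H \;=\; \sum_{i<j} J_{ij}\,\sigma^x_i \sigma^x_j \;+\; \sum_i \bigl( B^x_i\,\sigma^x_i + B^z_i\,\sigma^z_i \bigr),
\qquad
J_{ij} \approx \frac{J_0}{|i-j|^{\alpha}}, \quad 0 < \alpha < 3,
```

with the XY model obtained by replacing the Ising coupling with a spin-exchange term \(\sigma^+_i \sigma^-_j + \mathrm{h.c.}\)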
Abstract:
This study highlights the importance of cognition-affect interaction pathways in the construction of mathematical knowledge. The existing research calls for further work on the conceptual structure underlying such interaction, aimed at coping with the high complexity of its interpretation. The paper discusses the effectiveness of using a dynamic model such as that outlined in the Mathematical Working Spaces (MWS) framework in order to describe the interplay between cognition and affect in the transitions from instrumental to discursive geneses in geometrical reasoning. The results, based on empirical data from a teaching experiment at a middle school, show that the use of dynamic geometry software supports students' attitudinal and volitional dimensions and helps them to maintain productive affective pathways, affording greater intellectual independence in mathematical work and an interaction with the context that impacts learning opportunities in geometric proofs. The reflective and heuristic dimensions of teacher mediation in students' learning are crucial in the transition from instrumental to discursive genesis and in working stability in the Instrumental-Discursive plane of MWS.
Abstract:
Entangled quantum states can be given a separable decomposition if we relax the restriction that the local operators be quantum states. Motivated by the construction of classical simulations and local hidden variable models, we construct 'smallest' local sets of operators that achieve this. In other words, given an arbitrary bipartite quantum state, we construct convex sets of local operators that allow for a separable decomposition but that cannot be made smaller while continuing to do so. We then consider two further variants of the problem in which the local state spaces are required to contain the local quantum states, and obtain solutions for a variety of cases, including a region of pure states around the maximally entangled state. The methods involve calculating certain forms of cross norm. Two of the variants of the problem have a strong relationship to theorems on ensemble decompositions of positive operators, and our results thereby give those theorems an added interpretation. The results generalise those obtained in our previous work on this topic [New J. Phys. 17, 093047 (2015)].
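One representative norm of this kind (a standard definition given as background; whether it is exactly the variant used in the paper is an assumption on our part) is the greatest cross norm with respect to the trace norm,

```latex
\lVert \rho \rVert_{\gamma} \;=\; \inf \Bigl\{\, \sum_i \lVert a_i \rVert_1 \, \lVert b_i \rVert_1 \;:\; \rho = \sum_i a_i \otimes b_i \,\Bigr\},
```

where the infimum runs over finite decompositions into tensor products of Hermitian local operators.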
Abstract:
The present manuscript focuses on lattice gauge theories based on finite groups. For the purpose of quantum simulation, the Hamiltonian approach is considered, with the finite group serving as a discretization scheme for the degrees of freedom of the gauge fields. Several aspects of these models are studied. First, we investigate dualities in Abelian models with a restricted geometry, using a systematic approach. This leads to a rich phase diagram that depends on the super-selection sectors. Second, we construct a family of lattice Hamiltonians for gauge theories with a finite group, either Abelian or non-Abelian. We show that it is possible to express the electric term as a natural graph Laplacian, and that the physical Hilbert space can be explicitly built using spin network states. In both cases we perform numerical simulations in order to establish the correctness of the theoretical results and to investigate the models further.
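The graph-Laplacian form of the electric term admits a compact illustration. Here is a minimal Python sketch (our illustration, not the manuscript's code), taking the cyclic group Z_N with generating set {+1, -1} as an assumed example:

```python
# Minimal sketch (illustrative, not from the manuscript): the single-link
# electric Hamiltonian for a finite gauge group as the graph Laplacian
# L = D - A of a Cayley graph of the group. Example: Z_N, generators {+1, -1}.
import numpy as np

def electric_laplacian(N):
    A = np.zeros((N, N))                 # adjacency of the Cayley graph of Z_N
    for g in range(N):
        for s in (1, -1):                # assumed generating set {+1, -1}
            A[g, (g + s) % N] = 1
    D = np.diag(A.sum(axis=1))           # degree matrix
    return D - A                         # graph Laplacian = electric term

H_E = electric_laplacian(4)
print(np.linalg.eigvalsh(H_E))           # Z_4 gives 0, 2, 2, 4
```

The eigenvalues 2(1 - cos(2*pi*k/N)) reproduce the familiar electric energies of a Z_N gauge link, which is what makes the Laplacian a natural choice.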
Abstract:
This chapter provides a short review of quantum dot (QD) physics, applications, and perspectives. The main advantage of QDs over bulk semiconductors is that size becomes a control parameter for tailoring the optical properties of new materials. Size changes the confinement energy, which alters the optical properties of the material, such as its absorption, refractive index, and emission bands. Therefore, by using QDs one can make several kinds of optical devices. One class of devices transforms electrons into photons, for use as active optical components in illumination and displays. Other devices transform photons into electrons, yielding QD solar cells or photodetectors. At the biomedical interface, the application of QDs, which is the most important aspect in this book, is based on fluorescence, which essentially transforms photons into photons of different wavelengths. This chapter introduces parameters important for QDs' biophotonic applications, such as photostability, excitation and emission profiles, and quantum efficiency. We also present the perspectives for the use of QDs in fluorescence lifetime imaging (FLIM) and Förster resonance energy transfer (FRET), both so useful in modern microscopy, and show how to take advantage of the usually unwanted blinking effect to perform super-resolution microscopy.
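The size dependence mentioned above is commonly summarized by the Brus effective-mass expression (quoted as standard textbook background, not from the chapter):

```latex
E_g(R) \;\approx\; E_g^{\mathrm{bulk}} \;+\; \frac{\hbar^2 \pi^2}{2R^2}\left(\frac{1}{m_e^*} + \frac{1}{m_h^*}\right) \;-\; \frac{1.8\, e^2}{4\pi \varepsilon \varepsilon_0 R},
```

where R is the dot radius and m_e*, m_h* are the effective masses: shrinking the dot widens the gap, blue-shifting absorption and emission.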
Abstract:
Fluorescence correlation spectroscopy (FCS) is an optical technique that allows the measurement of the diffusion coefficient of molecules in a dilute sample. From the diffusion coefficient it is possible to calculate the hydrodynamic radius of the molecules. For colloidal quantum dots (QDs), the hydrodynamic radius is valuable information for studying interactions with other molecules or with other QDs. In this chapter we describe the main aspects of the technique and how to use it to calculate the hydrodynamic radius of QDs.
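The conversion from measured diffusion coefficient to hydrodynamic radius is the Stokes-Einstein relation; here is a minimal sketch in Python (the numerical values are illustrative assumptions, not data from the chapter):

```python
# Minimal sketch: hydrodynamic radius from an FCS diffusion coefficient via
# Stokes-Einstein, R_h = k_B * T / (6 * pi * eta * D). Values are illustrative.
from math import pi

K_B = 1.380649e-23                 # Boltzmann constant, J/K

def hydrodynamic_radius(D, T=298.15, eta=0.89e-3):
    """D in m^2/s, T in K, eta (solvent viscosity) in Pa*s; returns meters."""
    return K_B * T / (6 * pi * eta * D)

D = 5.0e-11                        # hypothetical QD diffusion coefficient, m^2/s
print(f"R_h = {hydrodynamic_radius(D) * 1e9:.1f} nm")   # ~4.9 nm in water at 25 C
```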
Abstract:
Atomic charge transfer-counter polarization effects determine most of the infrared fundamental CH intensities of simple hydrocarbons: methane, ethylene, ethane, propyne, cyclopropane and allene. The quantum theory of atoms in molecules/charge-charge flux-dipole flux (QTAIM/CCFDF) model predicted the values of 30 CH intensities ranging from 0 to 123 km mol⁻¹ with a root mean square (rms) error of only 4.2 km mol⁻¹, without including a specific equilibrium atomic charge term. Sums of the contributions from terms involving charge flux and/or dipole flux averaged 20.3 km mol⁻¹, about ten times larger than the average charge contribution of 2.0 km mol⁻¹. The only notable exceptions are the CH stretching and bending intensities of acetylene and two of the propyne vibrations for hydrogens bound to sp-hybridized carbon atoms. Calculations were carried out at four quantum levels: MP2/6-311++G(3d,3p), MP2/cc-pVTZ, QCISD/6-311++G(3d,3p) and QCISD/cc-pVTZ. The QCISD results are the most accurate of the four, with root mean square errors of 4.7 and 5.0 km mol⁻¹ for the 6-311++G(3d,3p) and cc-pVTZ basis sets, respectively. These values are close to the estimated aggregate experimental error of the hydrocarbon intensities, 4.0 km mol⁻¹. The atomic charge transfer-counter polarization effect is much larger than the charge effect at all four quantum levels. Charge transfer-counter polarization effects are expected to also be important in vibrations of more polar molecules, for which equilibrium charge contributions can be large.
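In the CCFDF model each dipole-moment derivative splits into charge, charge-flux and dipole-flux terms; the generic form (standard notation shown for orientation; prefactor conventions vary) for the x component along normal coordinate Q_j is

```latex
\frac{\partial p_x}{\partial Q_j}
 \;=\; \underbrace{\sum_i q_i \frac{\partial x_i}{\partial Q_j}}_{\text{charge}}
 \;+\; \underbrace{\sum_i x_i \frac{\partial q_i}{\partial Q_j}}_{\text{charge flux}}
 \;+\; \underbrace{\sum_i \frac{\partial m_{i,x}}{\partial Q_j}}_{\text{dipole flux}},
\qquad
A_j \propto \Bigl|\frac{\partial \vec{p}}{\partial Q_j}\Bigr|^2,
```

with q_i and m_i the QTAIM atomic charges and atomic dipoles; the charge transfer-counter polarization behavior discussed above shows up as large, partially cancelling charge-flux and dipole-flux contributions.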
Abstract:
One of the most important properties of quantum dots (QDs) is their size. Size determines their optical properties and, in a colloidal medium, their range of interaction. The most common techniques used to measure QD size are transmission electron microscopy (TEM) and X-ray diffraction. However, these techniques require the sample to be dried and kept under vacuum, so any hydrodynamic information is lost, and the preparation process may even alter the size of the QDs. Fluorescence correlation spectroscopy (FCS) is an optical technique with single-molecule sensitivity capable of extracting the hydrodynamic radius (HR) of QDs. The main drawback of FCS is the blinking phenomenon, which distorts the correlation function and makes the apparent QD size smaller than it really is. In this work, we developed a method to exclude blinking from the FCS analysis and measured the HR of colloidal QDs. We compared our results with TEM images; the HR obtained by FCS is larger than the radius measured by TEM. We attribute this difference to the cap layer of the QD, which cannot be seen in TEM images.
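For background on why blinking biases the size downward, here is a generic 3D FCS model with a multiplicative dark-state factor (a standard textbook form with illustrative parameters; the chapter's specific blinking-exclusion method is not reproduced here):

```python
# Generic 3D FCS autocorrelation with a dark-state (blinking) factor.
# Standard-form sketch with illustrative parameters; not the chapter's method.
import numpy as np

def fcs_model(tau, N, tau_D, s=5.0, F=0.0, tau_B=1e-5):
    """G(tau) for N emitters diffusing through a 3D Gaussian focal volume.

    tau_D: lateral diffusion time, s: axial/lateral aspect ratio,
    F: dark (blinking) fraction, tau_B: dark-state relaxation time.
    """
    diff = 1.0 / ((1 + tau / tau_D) * np.sqrt(1 + tau / (s ** 2 * tau_D)))
    blink = 1 + (F / (1 - F)) * np.exp(-tau / tau_B) if F > 0 else 1.0
    return blink * diff / N

tau = np.logspace(-7, 0, 200)                         # lag times, seconds
g_clean = fcs_model(tau, N=2.0, tau_D=1e-3)           # no blinking
g_blink = fcs_model(tau, N=2.0, tau_D=1e-3, F=0.3)    # 30% dark fraction
# Fitting g_blink with the F = 0 model biases tau_D (hence R_h) low --
# the "apparently smaller QD" artifact the abstract describes.
```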