993 results for Quantum Computing
Abstract:
We show that deterministic quantum computing with a single bit can determine whether the classical limit of a quantum system is chaotic or integrable using O(N) physical resources, where N is the dimension of the Hilbert space of the system under study. This is a square-root improvement over all known classical procedures. Our study relies strictly on the random matrix conjecture. We also present numerical results for the nonlinear kicked top.
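The computational primitive here is DQC1 ("one clean qubit"): a single pure qubit, coupled by a controlled-U to a maximally mixed N-dimensional register, acquires polarization <X> + i<Y> = Tr(U)/N. A minimal numpy sketch of this primitive, with a Haar-random unitary standing in for the kicked-top propagator (sizes and names are ours, not the paper's):

```python
import numpy as np

def dqc1_trace(U):
    """DQC1 estimate of Tr(U)/N: one pure qubit plus a maximally mixed register."""
    N = U.shape[0]
    plus = np.full((2, 2), 0.5)                      # |+><+| on the clean qubit
    rho = np.kron(plus, np.eye(N) / N)               # clean qubit (x) mixed register
    CU = np.kron(np.diag([1.0, 0.0]), np.eye(N)) \
       + np.kron(np.diag([0.0, 1.0]), U)             # controlled-U
    rho = CU @ rho @ CU.conj().T
    # Reduced state of the clean qubit: trace out the register.
    qubit = np.trace(rho.reshape(2, N, 2, N), axis1=1, axis2=3)
    return 2 * qubit[1, 0]                           # <X> + i<Y> = Tr(U)/N

rng = np.random.default_rng(1)
N = 8
G = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
Q, R = np.linalg.qr(G)
U = Q * (np.diag(R) / np.abs(np.diag(R)))            # Haar-random unitary
print(np.allclose(dqc1_trace(U), np.trace(U) / N))   # True
```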
Abstract:
In this paper we investigate the effect of dephasing on proposed quantum gates for the solid-state Kane quantum computing architecture. Using a simple model of decoherence, we find that the typical error in a controlled-NOT gate is 8.3×10⁻⁵. We also compute the fidelities of Z, X, swap, and controlled-Z operations under a variety of dephasing rates, and show that these numerical results are comparable with the error threshold required for fault-tolerant quantum computation.
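The flavour of such an error estimate can be reproduced with a toy calculation: apply independent single-qubit dephasing after an ideal CNOT and evaluate the average gate infidelity through the standard relation F_avg = (d·F_pro + 1)/(d + 1). The channel and the rate p below are illustrative stand-ins, not the paper's Kane-architecture model:

```python
import numpy as np
from itertools import product

def dephased_gate_error(U, p):
    """Average gate infidelity of ideal U followed by independent
    single-qubit dephasing (Kraus ops sqrt(1-p)*I and sqrt(p)*Z per qubit)."""
    d = U.shape[0]
    I2, Z = np.eye(2), np.diag([1.0, -1.0])
    kraus1 = [np.sqrt(1 - p) * I2, np.sqrt(p) * Z]
    # Process (entanglement) fidelity: sum_k |Tr(U^dag K_k)|^2 / d^2.
    f_pro = sum(abs(np.trace(U.conj().T @ np.kron(A, B) @ U)) ** 2
                for A, B in product(kraus1, repeat=2)) / d**2
    return 1 - (d * f_pro + 1) / (d + 1)      # process -> average fidelity

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
print(dephased_gate_error(CNOT, 1e-4))        # ~1.6e-4 for this toy channel
```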
Abstract:
We propose an approach to optical quantum computation in which a deterministic entangling quantum gate may be performed using, on average, a few hundred coherently interacting optical elements (beam splitters, phase shifters, single photon sources, and photodetectors with feedforward). This scheme combines ideas from the optical quantum computing proposal of Knill, Laflamme, and Milburn [Nature (London) 409, 46 (2001)], and the abstract cluster-state model of quantum computation proposed by Raussendorf and Briegel [Phys. Rev. Lett. 86, 5188 (2001)].
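The cluster-state side of this combination is easy to make concrete. The sketch below (our own toy size, plain numpy) prepares a 4-qubit linear cluster state by applying CZ gates between neighbours of |+>^4 and verifies the defining stabilizers K_a = X_a Z_{a-1} Z_{a+1}:

```python
import numpy as np
from functools import reduce

I, X, Z = np.eye(2), np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])
kron_all = lambda ops: reduce(np.kron, ops)

n = 4
state = np.full(2 ** n, 2.0 ** (-n / 2))          # |+>^n, uniform amplitudes
idx = np.arange(2 ** n)
for a in range(n - 1):                            # CZ between neighbours a, a+1
    b1 = (idx >> (n - 1 - a)) & 1
    b2 = (idx >> (n - 2 - a)) & 1
    state = state * np.where(b1 & b2, -1.0, 1.0)  # phase -1 iff both bits set

for a in range(n):                                # stabilizers K_a = X_a Z_{a-1} Z_{a+1}
    ops = [I] * n
    ops[a] = X
    for b in (a - 1, a + 1):
        if 0 <= b < n:
            ops[b] = Z
    assert np.allclose(kron_all(ops) @ state, state)
print("all cluster-state stabilizers check out")
```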
Abstract:
We describe an approach for characterizing the process performed by a quantum gate using quantum process tomography, by first modeling the gate in an extended Hilbert space which includes non-qubit degrees of freedom. To prevent unphysical processes from being predicted, present quantum process tomography procedures incorporate mathematical constraints that make no assumptions as to the actual physical nature of the system being described. By contrast, the procedure presented here assumes a particular class of physical processes and enforces physicality by fitting the data to this model. This allows quantum process tomography to be performed using a smaller experimental data set, and produces parameters with a direct physical interpretation. The approach is demonstrated using the example of mode matching in an all-optical controlled-NOT gate. The techniques described are general and could be applied to other optical circuits or quantum computing architectures.
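The idea of enforcing physicality by fitting to a model, rather than through generic mathematical constraints, can be illustrated with a much simpler stand-in than the paper's mode-matching model: a one-parameter depolarizing channel fitted to noisy tomographic data, with the fit restricted to the channel's physical parameter range. All numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
s_true = 0.9                                   # true depolarizing parameter
data = s_true + 0.02 * rng.normal(size=3)      # noisy decay of <X>, <Y>, <Z>
# Unconstrained reconstruction would take the three numbers at face value,
# and statistical noise can then imply a non-CP (unphysical) map.
# Model fit: one parameter, least squares (here just the mean), restricted
# to the physical range -1/3 <= s <= 1 of the qubit depolarizing channel.
s_fit = float(np.clip(data.mean(), -1 / 3, 1.0))
print(s_fit)
```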
Abstract:
Photonic quantum-information processing schemes, such as linear optics quantum computing, and other experiments relying on single-photon interference inherently require complete photon indistinguishability for the desired photonic interactions to take place. Mode mismatch is the dominant cause of photon distinguishability in optical circuits. Here we study the effect of photon wave-packet shape on tolerance against mode mismatch in linear optical circuits, and show that Gaussian-distributed photons with large bandwidth are optimal. The result is general and holds for arbitrary linear optical circuits, including ones which allow for postselection and classical feedforward. Our findings indicate that some single-photon sources, frequently cited for their potential application to quantum-information processing, may in fact be suboptimal for such applications.
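The quantity at stake is the mode overlap |<psi1|psi2>|² between photon wave packets. The sketch below evaluates this overlap numerically for unit-norm Gaussian spectral amplitudes with a fixed centre-frequency mismatch, and shows it growing with bandwidth, consistent with the tolerance claim above; the numbers are illustrative, not the paper's:

```python
import numpy as np

def overlap(sigma, delta):
    """|<psi1|psi2>|^2 for unit-norm Gaussian spectral amplitudes of RMS
    bandwidth sigma whose centre frequencies differ by delta."""
    w = np.linspace(-60.0, 60.0, 24001)
    dw = w[1] - w[0]
    norm = (2 * np.pi * sigma**2) ** -0.25
    psi1 = norm * np.exp(-w**2 / (4 * sigma**2))
    psi2 = norm * np.exp(-(w - delta)**2 / (4 * sigma**2))
    # Riemann sum of the overlap integral; analytically exp(-delta^2/(4 sigma^2)).
    return (np.sum(psi1 * psi2) * dw) ** 2

for sigma in (0.5, 1.0, 2.0, 4.0):
    print(sigma, overlap(sigma, delta=1.0))    # overlap grows with bandwidth
```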
Abstract:
Operator quantum error correction is a recently developed theory that provides a generalized and unified framework for active error correction and passive error avoiding schemes. In this Letter, we describe these codes using the stabilizer formalism. This is achieved by adding a gauge group to stabilizer codes that defines an equivalence class between encoded states. Gauge transformations leave the encoded information unchanged; their effect is absorbed by virtual gauge qubits that do not carry useful information. We illustrate the construction by identifying a gauge symmetry in Shor's 9-qubit code that allows us to remove 3 of its 8 stabilizer generators, leading to a simpler decoding procedure and a wider class of logical operations without affecting its essential properties. This opens the path to possible improvements of the error threshold of fault-tolerant quantum computing.
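The bookkeeping behind such stabilizer manipulations is compact: writing Pauli strings as binary (x|z) vectors, commutation reduces to a symplectic product mod 2. The sketch below lists the eight stabilizer generators of Shor's code in this form and checks that they commute pairwise; identifying which generators can be demoted to gauge operators is the Letter's construction and is not reproduced here:

```python
import numpy as np

def pauli(s):
    """'ZZIIIIIII' -> binary symplectic vector (x|z) over 9 qubits."""
    x = np.array([c in 'XY' for c in s], dtype=int)
    z = np.array([c in 'ZY' for c in s], dtype=int)
    return np.concatenate([x, z])

def commute(p, q):
    n = len(p) // 2
    # Symplectic product: x1.z2 + z1.x2 (mod 2); zero means commuting.
    return (p[:n] @ q[n:] + p[n:] @ q[:n]) % 2 == 0

gens = [pauli(g) for g in [
    'ZZIIIIIII', 'IZZIIIIII', 'IIIZZIIII', 'IIIIZZIII',
    'IIIIIIZZI', 'IIIIIIIZZ', 'XXXXXXIII', 'IIIXXXXXX']]
assert all(commute(p, q) for p in gens for q in gens)
print("all 8 generators of Shor's code commute")
```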
Abstract:
Circuit QED is a promising solid-state quantum computing architecture. It also has excellent potential as a platform for quantum control experiments, especially quantum feedback control. However, the current scheme for measurement in circuit QED has low efficiency and a low signal-to-noise ratio for single-shot measurements. The low quality of this measurement makes the implementation of feedback difficult. Here we propose two schemes for measurement in circuit QED architectures that significantly improve the signal-to-noise ratio and can potentially achieve quantum-limited measurement. Such measurements would enable the implementation of quantum feedback protocols, and we illustrate this with a simple entanglement-stabilization scheme.
Abstract:
Photo-detection plays a fundamental role in experimental quantum optics and is of particular importance in the emerging field of linear optics quantum computing. Present theoretical treatments of photo-detectors are highly idealized and fail to consider many important physical effects. We present a physically motivated model for photo-detectors which accounts for the effects of finite resolution, bandwidth, and efficiency, as well as dark counts and dead time. We apply our model to two simple, well-known applications, illustrating the significance of these characteristics.
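A toy Monte Carlo version of such a detector model is easy to write down; the sketch below registers each incident photon with efficiency eta, smears click times with Gaussian jitter as a stand-in for finite resolution, adds Poissonian dark counts, and enforces a hard dead time. All parameter values are illustrative, not fitted to any real device:

```python
import numpy as np

def detect(photon_times, eta=0.6, dark_rate=1e5, dead=50e-9,
           jitter=1e-9, t_max=1e-3, seed=0):
    """Click times from true photon arrival times, with efficiency eta,
    Gaussian timing jitter, Poissonian dark counts, and a hard dead time."""
    rng = np.random.default_rng(seed)
    kept = np.asarray(photon_times)[rng.random(len(photon_times)) < eta]
    kept = kept + rng.normal(0.0, jitter, len(kept))        # finite resolution
    darks = rng.uniform(0.0, t_max, rng.poisson(dark_rate * t_max))
    hits = np.sort(np.concatenate([kept, darks]))
    clicks, last = [], -np.inf
    for t in hits:
        if t - last >= dead:                                # dead time window
            clicks.append(t)
            last = t
    return np.array(clicks)

print(len(detect(np.linspace(0.0, 1e-3, 200))))             # ~220 clicks
```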
Abstract:
In this paper we carry out a detailed numerical investigation of the fault-tolerant threshold for optical cluster-state quantum computation. Our noise model allows both photon loss and depolarizing noise, the latter serving as a general proxy for all types of local noise other than photon loss. We obtain a threshold region of allowed pairs of values for the two types of noise. Roughly speaking, our results show that scalable optical quantum computing is possible in the combined presence of both noise types, provided that the loss probability is less than 3×10⁻³ and the depolarization probability is less than 10⁻⁴. Our fault-tolerant protocol involves a number of innovations, including a method for syndrome extraction known as telecorrection, whereby repeated syndrome measurements are guaranteed to agree. This paper is an extended version of the work of Dawson et al.
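For intuition about what such a threshold means (and only for intuition: the toy below is a classical majority-vote concatenation, not the telecorrection protocol of the paper), one can iterate the error-suppression map of a 3-copy majority code and watch the per-level error shrink below the toy threshold and grow above it:

```python
def logical_error(p, levels):
    """Iterated per-level error of a 3-copy majority vote (toy model)."""
    for _ in range(levels):
        p = 3 * p**2 - 2 * p**3   # probability that 2 or 3 copies fail
    return p

for p in (0.4, 0.6):
    print(p, [round(logical_error(p, k), 4) for k in range(4)])
# 0.4 decays towards 0; 0.6 grows towards 1 (toy threshold at 0.5).
```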
Abstract:
In this Letter we numerically investigate the fault-tolerant threshold for optical cluster-state quantum computing. We allow both photon loss noise and depolarizing noise (as a general proxy for all local noise), and obtain a threshold region of allowed pairs of values for the two types of noise. Roughly speaking, our results show that scalable optical quantum computing is possible for photon loss probabilities < 3×10⁻³ and depolarization probabilities < 10⁻⁴.
Abstract:
We present here a new approach to scalable quantum computing – a 'qubus computer' – which realizes qubit measurement and quantum gates by coupling the qubits to a quantum communication bus mode. The qubits could be 'static' matter qubits or 'flying' optical qubits, but the scheme we focus on here is particularly suited to matter qubits. There is no requirement for direct interaction between the qubits. Universal two-qubit quantum gates may be effected by schemes which involve measurement of the bus mode, or by schemes where the bus disentangles automatically and no measurement is needed. In effect, the approach integrates qubit degrees of freedom for computation with quantum continuous variables for communication and interaction.
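The mechanism of the measurement-free variant can be sketched in a few lines of phase-space bookkeeping: conditional displacements of the bus compose as D(b)D(a) = exp(i·Im(b·a*))·D(a+b), so a closed loop whose legs depend on the two qubits' Z eigenvalues returns the bus to its initial state while leaving a conditional phase behind. The specific loop and the value of beta below are our illustration, not the paper's pulse sequence:

```python
import numpy as np

def loop_phase(s1, s2, beta=0.5):
    """Accumulated phase for qubit Z-eigenvalues s1, s2 = +/-1 after the
    loop of conditional displacements (s1*b, i*s2*b, -s1*b, -i*s2*b)."""
    steps = [s1 * beta, 1j * s2 * beta, -s1 * beta, -1j * s2 * beta]
    alpha, phase = 0.0, 0.0
    for d in steps:
        phase += np.imag(d * np.conj(alpha))   # phase from the composition rule
        alpha += d                             # bus amplitude; returns to 0
    return phase

for s1 in (+1, -1):
    for s2 in (+1, -1):
        print(s1, s2, loop_phase(s1, s2))      # phase = 2*s1*s2*beta**2
```

Because the total displacement vanishes, the bus disentangles automatically, and the surviving phase 2·s1·s2·beta² is a two-qubit conditional phase, matching the measurement-free gate described above.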
Abstract:
The physical implementation of quantum information processing is one of the major challenges of current research. In the last few years, several theoretical proposals and experimental demonstrations on a small number of qubits have been carried out, but a quantum computing architecture that is straightforwardly scalable, universal, and realizable with state-of-the-art technology is still lacking. In particular, a major ultimate objective is the construction of quantum simulators, yielding massively increased computational power in simulating quantum systems. Here we investigate promising routes towards the actual realization of a quantum computer, based on spin systems.

The first one employs molecular nanomagnets with a doublet ground state to encode each qubit and exploits the wide chemical tunability of these systems to obtain the proper topology of inter-qubit interactions. Indeed, recent advances in coordination chemistry allow us to arrange these qubits in chains, with tailored interactions mediated by magnetic linkers. These act as switches of the effective qubit-qubit coupling, thus enabling the implementation of one- and two-qubit gates. Molecular qubits can be controlled either by uniform magnetic pulses or by local electric fields. We introduce here two different schemes for quantum information processing, with either global or local control of the inter-qubit interaction, and demonstrate the high performance of these platforms by simulating the system time evolution with state-of-the-art parameters.

The second architecture we propose is based on a hybrid spin-photon qubit encoding, which exploits the best characteristics of photons, whose mobility is used to efficiently establish long-range entanglement, and of spin systems, which ensure long coherence times. The setup consists of spin ensembles coherently coupled to single photons within superconducting coplanar waveguide resonators. The tunability of the resonators' frequency is exploited as the only manipulation tool to implement a universal set of quantum gates, by bringing the photons into and out of resonance with the spin transition. The time evolution of the system subject to the pulse sequence used to implement complex quantum algorithms has been simulated by numerically integrating the master equation for the system density matrix, thus including the harmful effects of decoherence. Finally, a scheme to overcome the leakage of information due to inhomogeneous broadening of the spin ensemble is pointed out.

Both of the proposed setups are based on state-of-the-art technological achievements. By extensive numerical experiments we show that their performance is remarkably good, even for the implementation of long sequences of gates used to simulate interesting physical models. Therefore, the systems examined here are promising building blocks of future scalable architectures and can be used for proof-of-principle experiments of quantum information processing and quantum simulation.
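Since the thesis repeatedly appeals to numerical integration of the master equation, here is a minimal sketch of that kind of simulation: a single driven qubit with pure dephasing, integrated with fourth-order Runge-Kutta. The Hamiltonian, the dephasing rate, and the step size are illustrative; the thesis's spin-ensemble models are much richer:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)

def lindblad_rhs(rho, H, L, gamma):
    """drho/dt = -i[H, rho] + gamma (L rho L^dag - {L^dag L, rho}/2)."""
    comm = -1j * (H @ rho - rho @ H)
    diss = gamma * (L @ rho @ L.conj().T
                    - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return comm + diss

# Rabi drive with dephasing; integrate with 4th-order Runge-Kutta.
H, L, gamma, dt = 0.5 * 2 * np.pi * sx, sz, 0.1, 1e-3
rho = np.array([[1, 0], [0, 0]], dtype=complex)
for _ in range(5000):
    k1 = lindblad_rhs(rho, H, L, gamma)
    k2 = lindblad_rhs(rho + 0.5 * dt * k1, H, L, gamma)
    k3 = lindblad_rhs(rho + 0.5 * dt * k2, H, L, gamma)
    k4 = lindblad_rhs(rho + dt * k3, H, L, gamma)
    rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
print(np.real(np.trace(rho @ sz)))   # damped Rabi oscillation of <Z>
```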
Abstract:
To the two classical reversible 1-bit logic gates, i.e. the identity gate (a.k.a. the follower) and the NOT gate (a.k.a. the inverter), we add an extra gate, the square root of NOT. Similarly, we add to the 24 classical reversible 2-bit circuits, both the square root of NOT and the controlled square root of NOT. This leads to a new kind of calculus, situated between classical reversible computing and quantum computing.
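The defining algebra is easily checked numerically; a minimal sketch (our matrix conventions):

```python
import numpy as np

NOT = np.array([[0, 1], [1, 0]])
SQRT_NOT = 0.5 * np.array([[1 + 1j, 1 - 1j],
                           [1 - 1j, 1 + 1j]])
assert np.allclose(SQRT_NOT @ SQRT_NOT, NOT)            # sqrt(NOT)^2 = NOT

# Controlled versions: block-diagonal embedding acting when control = 1.
C = lambda U: np.block([[np.eye(2), np.zeros((2, 2))],
                        [np.zeros((2, 2)), U]])
assert np.allclose(C(SQRT_NOT) @ C(SQRT_NOT), C(NOT))   # controlled case
print("sqrt(NOT) squares to NOT, as does its controlled version")
```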
Abstract:
A major focus of research in nanotechnology is the development of novel, high-throughput techniques for fabrication of arbitrarily shaped surface nanostructures of sub-100 nm to atomic scale. A related pursuit is the development of simple and efficient means for parallel manipulation and redistribution of adsorbed atoms, molecules and nanoparticles on surfaces – adparticle manipulation. These techniques will be used for the manufacture of nanoscale surface-supported functional devices in nanotechnologies such as quantum computing, molecular electronics and lab-on-a-chip, as well as for modifying surfaces to obtain novel optical, electronic, chemical, or mechanical properties.

A favourable approach to formation of surface nanostructures is self-assembly. In self-assembly, nanostructures are grown by aggregation of individual adparticles that diffuse by thermally activated processes on the surface. The passive nature of this process means it is generally not suited to formation of arbitrarily shaped structures. The self-assembly of nanostructures at arbitrary positions has been demonstrated, though these demonstrations have typically required a pre-patterning treatment of the surface using sophisticated techniques such as electron beam lithography. On the other hand, a parallel adparticle manipulation technique would be suited to directing the self-assembly process to occur at arbitrary positions, without the need for pre-patterning the surface. There is at present a lack of techniques for parallel manipulation and redistribution of adparticles to arbitrary positions on the surface. This is an issue that needs to be addressed, since these techniques can play an important role in nanotechnology.

In this thesis, we propose such a technique – thermal tweezers. In thermal tweezers, adparticles are redistributed by localised heating of the surface. This locally enhances surface diffusion of adparticles so that they rapidly diffuse away from the heated regions. Using this technique, the redistribution of adparticles to form a desired pattern is achieved by heating the surface at specific regions. In this project, we have focussed on the holographic implementation of this approach, where the surface is heated by holographic patterns of interfering pulsed laser beams. This implementation is suitable for the formation of arbitrarily shaped structures; the only condition is that the shape can be produced by holographic means. In the simplest case, the laser pulses are linearly polarised and intersect to form an interference pattern that is a modulation of intensity along a single direction. Strong optical absorption at the intensity maxima of the interference pattern results in an approximately sinusoidal variation of the surface temperature along one direction.

The main aim of this research project is to investigate the feasibility of the holographic implementation of thermal tweezers as an adparticle manipulation technique. Firstly, we investigate theoretically the surface diffusion of adparticles in the presence of sinusoidal modulation of the surface temperature. Very strong redistribution of adparticles is predicted when there is strong interaction between the adparticle and the surface and the amplitude of the temperature modulation is ~100 K. We have proposed a thin metallic film deposited on a glass substrate and heated by interfering laser beams (optical wavelengths) as a means of generating a very large amplitude of surface temperature modulation. Indeed, we predict theoretically, by numerical solution of the thermal conduction equation, that the amplitude of the temperature modulation on the metallic film can be much greater than 100 K when heated by nanosecond pulses with an energy of ~1 mJ. The formation of surface nanostructures of less than 100 nm in width is predicted at optical wavelengths in this implementation of thermal tweezers. Furthermore, we propose a simple extension to this technique, where a spatial phase shift of the temperature modulation effectively doubles or triples the resolution. At the same time, increased resolution is predicted by reducing the wavelength of the laser pulses.

In addition, we present two distinctly different, computationally efficient numerical approaches for theoretical investigation of the surface diffusion of interacting adparticles – the Monte Carlo Interaction Method (MCIM) and the random potential well method (RPWM). Using each of these approaches we have investigated thermal tweezers for redistribution of both strongly and weakly interacting adparticles. We have predicted that strong interactions between adparticles can increase the effectiveness of thermal tweezers, by demonstrating practically complete adparticle redistribution into the low-temperature regions of the surface. This is promising from the point of view of thermal tweezers applied to directed self-assembly of nanostructures.

Finally, we present a new and more efficient numerical approach to the theoretical investigation of thermal tweezers for non-interacting adparticles. In this approach, the local diffusion coefficient is determined from solution of the Fokker-Planck equation. The diffusion equation is then solved numerically using the finite volume method (FVM) to directly obtain the probability density of adparticle position. We compare predictions of this approach to those of the Ermak algorithm solution of the Langevin equation, and relatively good agreement is shown at intermediate and high friction. In the low-friction regime, we predict and investigate the phenomenon of 'optimal' friction and describe its occurrence due to very long jumps of adparticles as they diffuse from the hot regions of the surface. Future research directions, both theoretical and experimental, are also discussed.
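The redistribution mechanism can be caricatured with a toy 1-D hopping model (our own illustration, not the thesis's MCIM, RPWM or FVM codes): Arrhenius hop rates on a lattice with a sinusoidal temperature profile drive adparticles out of the hot regions, with steady-state coverage proportional to the inverse hop rate. All parameter values are illustrative, and the loop may take a few seconds:

```python
import numpy as np

n, Ea_over_kB = 100, 2000.0                    # sites; activation energy / k_B (K)
x = np.arange(n)
T = 400.0 + 100.0 * np.sin(2 * np.pi * x / n)  # ~100 K temperature modulation
rate = np.exp(-Ea_over_kB / T)                 # Arrhenius hop rate per site
rho = np.full(n, 1.0 / n)                      # uniform initial coverage
dt = 0.1 / rate.max()                          # stable explicit step
for _ in range(200000):
    out = rate * rho * dt                      # probability hopping out of each site
    rho += 0.5 * (np.roll(out, 1) + np.roll(out, -1)) - out   # periodic lattice
# Steady state has rho proportional to 1/rate: accumulation in the cold regions.
print(rho[np.argmin(T)] / rho[np.argmax(T)])   # ~14 = rate_hot / rate_cold
```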