998 results for Quantum algorithms
Abstract:
Visual inputs to artificial and biological visual systems are often quantized: cameras accumulate photons from the visual world, and the brain receives action potentials from visual sensory neurons. Collecting more information quanta lengthens acquisition time but improves performance, yet in many visual tasks a small number of quanta suffices to solve the task well. The ability to determine the right number of quanta is pivotal when visual information is costly to obtain, such as in photon-starved or time-critical environments; in these situations, conventional vision systems that always collect a fixed, large amount of information are infeasible. I develop a framework that judiciously determines the number of information quanta to observe based on the cost of observation and the required accuracy. The framework implements the optimal speed-accuracy tradeoff when two assumptions are met, namely that the task is fully specified probabilistically and constant over time. I also extend the framework to address scenarios that violate these assumptions. I deploy the framework on three recognition tasks: visual search (where both assumptions are satisfied), scotopic visual recognition (where the model is not specified), and visual discrimination with unknown stimulus onset (where the model changes over time). Scotopic classification experiments suggest that the framework yields dramatic improvements in photon efficiency over conventional computer vision algorithms. Human psychophysics experiments confirm that the framework provides a parsimonious and versatile explanation of human behavior under time pressure in both static and dynamic environments.
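Under the two assumptions above, the optimal stopping rule is a sequential probability ratio test: accumulate evidence one quantum at a time and stop as soon as a decision threshold is crossed. A minimal sketch, with all rates, thresholds, and the Poisson observation model chosen for illustration rather than taken from the thesis:

```python
import math
import random

def sprt(sample, llr, alpha=0.01, beta=0.01, max_obs=10_000):
    """Sequential probability ratio test: accumulate the log-likelihood
    ratio one observation at a time, stopping at the first threshold hit."""
    upper = math.log((1 - beta) / alpha)   # cross -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross -> accept H0
    s = 0.0
    for n in range(1, max_obs + 1):
        s += llr(sample())
        if s >= upper:
            return "H1", n
        if s <= lower:
            return "H0", n
    return "undecided", max_obs

# Toy example: dark patch (rate 1.0) vs bright patch (rate 2.0),
# photon counts per frame modelled as Poisson (hypothetical numbers).
random.seed(0)
rate0, rate1, true_rate = 1.0, 2.0, 2.0

def poisson(lam):
    # Knuth's algorithm, adequate for small rates
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def llr(k):
    # log P(k | H1) - log P(k | H0) for Poisson counts
    return k * math.log(rate1 / rate0) - (rate1 - rate0)

decision, n_quanta = sprt(lambda: poisson(true_rate), llr)
```

The stopping time `n_quanta` adapts to the evidence: easy discriminations terminate after a handful of quanta, hard ones collect more.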
Abstract:
One of the main practical implications of quantum mechanical theory is quantum computing, and therefore the quantum computer. Quantum computing (for example, with Shor's algorithm) challenges the computational hardness assumptions, such as the factoring problem and the discrete logarithm problem, that anchor the security of cryptosystems. The scientific community is therefore studying how to defend cryptography along two strategies: quantum cryptography (which uses quantum cryptographic protocols running on quantum hardware) and post-quantum cryptography (based on classical cryptographic algorithms that are resistant to quantum computers). For example, the National Institute of Standards and Technology (NIST) is collecting and standardizing post-quantum ciphers, as it did in the past when it established DES and AES as symmetric cipher standards. This thesis gives an introduction to quantum mechanics, in order to discuss quantum computing and analyze Shor's algorithm. The differences between quantum and post-quantum cryptography are then analyzed. Subsequently, the focus turns to the mathematical problems assumed to be resistant to quantum computers. To conclude, the post-quantum digital signature algorithms selected by NIST are studied and compared with a view to their use in everyday applications.
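The classical scaffolding of Shor's algorithm can be illustrated without quantum hardware: once the order r of a base a modulo N is known (the step a quantum computer accelerates exponentially), nontrivial factors follow from two gcd computations. A toy sketch with N = 15:

```python
from math import gcd

def find_order(a, n):
    """Classically find the multiplicative order r of a mod n, i.e. the
    smallest r > 0 with a**r % n == 1. This brute-force loop is the step
    Shor's algorithm replaces with quantum period finding."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, a):
    """Given a base a coprime to n, try to split n using the order of a."""
    assert gcd(a, n) == 1
    r = find_order(a, n)
    if r % 2 == 1:
        return None                      # odd order: retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                      # trivial square root: retry
    return gcd(y - 1, n), gcd(y + 1, n)

# 7 has order 4 mod 15 (7**4 = 2401 = 160*15 + 1), so
# gcd(7**2 - 1, 15) and gcd(7**2 + 1, 15) split 15.
print(shor_factor(15, 7))  # → (3, 5)
```

The exponential cost of `find_order` on large n is exactly what makes RSA-style factoring assumptions safe classically, and unsafe against quantum period finding.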
Abstract:
In the last few years, techniques such as quantum computers and quantum communication systems have developed rapidly, owing to their huge potential and a growing number of applications. However, physical qubits suffer from many nonidealities, such as measurement errors and decoherence, that cause failures in quantum computation. This work shows how concepts from classical information theory can be exploited to build quantum error-correcting codes by adding redundancy qubits. In particular, the threshold theorem states that the decoding failure rate can be lowered at will, provided the physical error rate is below a given accuracy threshold. The focus is on codes belonging to the family of topological codes, such as the toric, planar, and XZZX surface codes. First, they are compared from a theoretical point of view to show their advantages and disadvantages. The algorithms behind the minimum-weight perfect matching decoder, the most popular decoder for such codes, are then presented. The last section analyzes the performance of these topological codes under different error channel models, with interesting results: while the error correction capability of surface codes decreases in the presence of biased errors, XZZX codes possess intrinsic symmetries that improve their performance when one kind of error occurs more frequently than the others.
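The pairing step of a minimum-weight perfect matching decoder can be sketched on the 1D analogue of the toric code, a repetition code on a ring: a chain of flipped qubits produces a syndrome defect at each end, and the decoder pairs defects along shortest paths. The brute-force matcher below stands in for the Edmonds blossom algorithm used by real decoders; the code distance and syndrome positions are illustrative only.

```python
def min_weight_matching(defects, dist):
    """Brute-force minimum-weight perfect matching over an even-sized set
    of syndrome defects (fine for toy sizes; production decoders use
    Edmonds' blossom algorithm)."""
    def pairings(items):
        if not items:
            yield []
            return
        a, rest = items[0], items[1:]
        for i, b in enumerate(rest):
            for tail in pairings(rest[:i] + rest[i + 1:]):
                yield [(a, b)] + tail

    best, best_pairs = float("inf"), None
    for pairs in pairings(list(defects)):
        w = sum(dist(a, b) for a, b in pairs)
        if w < best:
            best, best_pairs = w, pairs
    return best_pairs

# Toy setting: distance-11 repetition code on a ring; distances wrap
# around, mirroring the periodic boundary of the toric code.
d = 11
ring_dist = lambda a, b: min(abs(a - b), d - abs(a - b))
defects = [1, 4, 7, 8]          # hypothetical syndrome positions
pairs = min_weight_matching(defects, ring_dist)
print(pairs)
```

Here the decoder pairs 7 with 8 (one flipped qubit between them) rather than matching across the longer arcs, which is the lowest-weight, hence most likely, error explanation.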
Abstract:
This chapter provides a short review of quantum dot (QD) physics, applications, and perspectives. The main advantage of QDs over bulk semiconductors is that size becomes a control parameter to tailor the optical properties of new materials. Size changes the confinement energy, which alters optical properties of the material such as absorption, refractive index, and emission bands. Therefore, QDs can be used to make several kinds of optical devices. Some devices transform electrons into photons, serving as active optical components in illumination and displays; others transform photons into electrons, yielding QD solar cells or photodetectors. At the biomedical interface, the application of QDs, the most important aspect in this book, is based on fluorescence, which essentially transforms photons into photons of different wavelengths. This chapter introduces parameters important for QD biophotonic applications, such as photostability, excitation and emission profiles, and quantum efficiency. We also present perspectives for the use of QDs in fluorescence lifetime imaging (FLIM) and Förster resonance energy transfer (FRET), so useful in modern microscopy, and show how to take advantage of the usually unwanted blinking effect to perform super-resolution microscopy.
Abstract:
Fluorescence correlation spectroscopy (FCS) is an optical technique that measures the diffusion coefficient of molecules in a dilute sample. From the diffusion coefficient it is possible to calculate the hydrodynamic radius of the molecules. For colloidal quantum dots (QDs), the hydrodynamic radius is valuable for studying interactions with other molecules or with other QDs. In this chapter we describe the main aspects of the technique and how to use it to calculate the hydrodynamic radius of QDs.
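The conversion from diffusion coefficient to hydrodynamic radius uses the Stokes-Einstein relation R_h = k_B T / (6 π η D). A minimal sketch with illustrative numbers; the diffusion coefficient below is hypothetical, not a measured value from this chapter:

```python
import math

# Stokes-Einstein: R_h = k_B * T / (6 * pi * eta * D)
k_B = 1.380649e-23      # J/K, Boltzmann constant
T = 298.15              # K, room temperature
eta = 8.9e-4            # Pa·s, viscosity of water at 25 °C
D = 5.0e-11             # m^2/s, hypothetical FCS-fitted diffusion coefficient

R_h = k_B * T / (6 * math.pi * eta * D)   # hydrodynamic radius, m
print(f"R_h = {R_h * 1e9:.2f} nm")        # → R_h = 4.91 nm
```

Note that R_h reflects the dot plus its solvation/ligand shell, which is why FCS radii typically exceed TEM core radii.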
Abstract:
Atomic charge transfer-counter polarization effects determine most of the infrared fundamental CH intensities of simple hydrocarbons, methane, ethylene, ethane, propyne, cyclopropane and allene. The quantum theory of atoms in molecules/charge-charge flux-dipole flux model predicted the values of 30 CH intensities ranging from 0 to 123 km mol(-1) with a root mean square (rms) error of only 4.2 km mol(-1) without including a specific equilibrium atomic charge term. Sums of the contributions from terms involving charge flux and/or dipole flux averaged 20.3 km mol(-1), about ten times larger than the average charge contribution of 2.0 km mol(-1). The only notable exceptions are the CH stretching and bending intensities of acetylene and two of the propyne vibrations for hydrogens bound to sp hybridized carbon atoms. Calculations were carried out at four quantum levels, MP2/6-311++G(3d,3p), MP2/cc-pVTZ, QCISD/6-311++G(3d,3p) and QCISD/cc-pVTZ. The results calculated at the QCISD level are the most accurate among the four with root mean square errors of 4.7 and 5.0 km mol(-1) for the 6-311++G(3d,3p) and cc-pVTZ basis sets. These values are close to the estimated aggregate experimental error of the hydrocarbon intensities, 4.0 km mol(-1). The atomic charge transfer-counter polarization effect is much larger than the charge effect for the results of all four quantum levels. Charge transfer-counter polarization effects are expected to also be important in vibrations of more polar molecules for which equilibrium charge contributions can be large.
Biased random-key genetic algorithms for the winner determination problem in combinatorial auctions.
Abstract:
In this paper, we address the problem of picking a subset of bids in a general combinatorial auction so as to maximize the overall profit under the first-price model. This winner determination problem assumes that a single bidding round is held to determine both the winners and the prices to be paid. We introduce six variants of biased random-key genetic algorithms for this problem. Three of them use a novel initialization technique that seeds the initial population with solutions of intermediate linear programming relaxations of an exact mixed integer linear programming model. An experimental evaluation compares the effectiveness of the proposed algorithms with the standard mixed integer linear programming formulation, a specialized exact algorithm, and the best-performing heuristics proposed for this problem. The proposed algorithms are competitive and offer strong results, mainly for large-scale auctions.
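The core of a biased random-key genetic algorithm is problem-independent: chromosomes are vectors of random keys in [0, 1), and a problem-specific decoder maps each chromosome to a feasible solution; here the decoder visits bids in key order and greedily accepts those whose items are still free. A minimal sketch, where the tiny auction instance and all parameters are illustrative and not taken from the paper:

```python
import random

# Bids are (price, items) pairs; item names are illustrative.
bids = [
    (6, {"a", "b"}),
    (5, {"b", "c"}),
    (4, {"c"}),
    (3, {"a"}),
]

def decode(keys):
    """Map random keys to a feasible selection: visit bids in key order,
    accept a bid iff none of its items is already sold."""
    order = sorted(range(len(bids)), key=lambda i: keys[i])
    taken, profit, chosen = set(), 0, []
    for i in order:
        price, items = bids[i]
        if taken.isdisjoint(items):
            taken |= items
            profit += price
            chosen.append(i)
    return profit, chosen

def brkga(pop=30, gens=50, elite=0.2, mutant=0.15, rho=0.7, seed=1):
    rng = random.Random(seed)
    n = len(bids)
    P = [[rng.random() for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda c: -decode(c)[0])
        ne, nm = int(elite * pop), int(mutant * pop)
        nxt = P[:ne]                                   # elites survive
        nxt += [[rng.random() for _ in range(n)]       # fresh mutants
                for _ in range(nm)]
        while len(nxt) < pop:                          # biased crossover:
            e = rng.choice(P[:ne])                     # one elite parent,
            o = rng.choice(P[ne:])                     # one non-elite,
            nxt.append([e[j] if rng.random() < rho else o[j]
                        for j in range(n)])            # elite gene w.p. rho
        P = nxt
    return max(decode(c) for c in P)

best_profit, best_bids = brkga()
```

On this instance the optimum takes bids 0 and 2 (items {a, b} and {c}) for a profit of 10, which the search finds quickly; the decoder is the only component that would change for a larger auction.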
Abstract:
Condensation processes are of key importance in nature and play a fundamental role in chemistry and physics. Owing to size effects at the nanoscale, it is conceptually desirable to probe experimentally how condensate structure depends on the number of constituents, one by one. Here we present an approach to study a condensation process atom by atom with the scanning tunnelling microscope, which provides direct real-space access, with atomic precision, to the aggregates formed in atomically defined 'quantum boxes'. Our analysis reveals the subtle interplay of competing directional and nondirectional interactions in the emergence of structure and provides unprecedented input for structural comparison with quantum mechanical models. This approach focuses on, but is not limited to, the model case of xenon condensation and goes significantly beyond the well-established statistical size analysis of clusters in atomic or molecular beams by mass spectrometry.
Abstract:
One of the most important properties of quantum dots (QDs) is their size, which determines their optical properties and, in a colloidal medium, their range of interaction. The most common techniques used to measure QD size are transmission electron microscopy (TEM) and X-ray diffraction. However, these techniques require the sample to be dried and kept under vacuum; any hydrodynamic information is thus lost, and the preparation process may even alter the size of the QDs. Fluorescence correlation spectroscopy (FCS) is an optical technique with single-molecule sensitivity capable of extracting the hydrodynamic radius (HR) of QDs. The main drawback of FCS is the blinking phenomenon, which alters the correlation function and makes the apparent QD size smaller than it really is. In this work, we developed a method to exclude blinking from the FCS data and measured the HR of colloidal QDs. Comparing our results with TEM images, the HR obtained by FCS is larger than the radius measured by TEM. We attribute this difference to the cap layer of the QD, which cannot be seen in TEM images.
Abstract:
This work reports the photophysical properties (excitation and fluorescence spectra, fluorescence quantum yield, fluorescence lifetimes) of poly(2,7-9,9'-dihexylfluorene-diyl) in dilute solutions of four solvents (toluene, tetrahydrofuran, chloroform, and ethyl acetate), as well as in the solid state. Photoluminescence spectra were characteristic of a disordered α-backbone chain conformation. Simulation of the electronic absorption spectra of oligomers containing 1 to 11 mers showed that the critical conjugation length lies between 6 and 7 mers. We also estimated theoretical dipole moments, which indicated that a coil conformation is formed with 8 repeating units per turn. We further showed that an energy transfer process appears in the solid state, decreasing the emission lifetime. Furthermore, based on the luminescent response of the systems studied here and the electroluminescent behavior reported in the literature, both photo- and electroluminescence emissions arise from the same emissive units.
Abstract:
The existence of a classical limit describing the interacting particles in a second-quantized theory of identical particles with bosonic symmetry is proved. This limit exists in addition to the previously established classical limit with classical field behavior, showing that the ℏ → 0 limit of the theory is not unique. An analogous result holds for a free massive scalar field: two distinct classical limits are proved to exist, describing a system of particles or a classical field. The introduction of local operators to represent kinematical properties of interest is shown to break the permutation symmetry under certain localizability conditions, allowing the study of individual particle properties.
Abstract:
We show that the one-loop effective action at finite temperature for a scalar field with quartic interaction has the same renormalized expression as at zero temperature if written in terms of a certain classical field phi_c, and if free propagators at zero temperature are traded for their finite-temperature counterparts. The result follows if we write the partition function as an integral over field eigenstates (boundary fields) of the density matrix element in the functional Schrödinger field representation and perform a semiclassical expansion in two steps: first, we integrate around the saddle point for fixed boundary fields, which is the classical field phi_c, a functional of the boundary fields; then, we perform a saddle-point integration over the boundary fields, whose correlations characterize the thermal properties of the system. This procedure provides a dimensionally reduced effective theory for the thermal system. We calculate the two-point correlation as an example.
Abstract:
In quantum wells, indium segregation leads to complex potential profiles that are seldom considered in theoretical models. The authors demonstrate that the split-operator method is a useful tool for obtaining the electronic properties in these cases. In particular, they studied the influence of indium surface segregation on the optical properties of InGaAs/GaAs quantum wells. Photoluminescence measurements were carried out on a set of InGaAs/GaAs quantum wells and compared with results obtained theoretically via the split-operator method, showing good agreement.
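The split-operator method alternates potential and kinetic propagation, applying the kinetic factor in momentum space via FFT. A minimal sketch in imaginary time for a finite square well, a crude stand-in for a quantum-well band-edge profile; units with ħ = m = 1, and all parameters are illustrative, not fitted to InGaAs/GaAs:

```python
import numpy as np

# Strang splitting of one step: exp(-H dt) ≈ exp(-V dt/2) exp(-T dt) exp(-V dt/2),
# with T = k^2/2 diagonal in momentum space (imaginary time projects
# any initial guess onto the ground state).
N, L = 512, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
V = np.where(np.abs(x) < 5.0, -1.0, 0.0)        # well: depth 1, width 10

dt = 0.01
expV = np.exp(-0.5 * dt * V)                    # half-step in potential
expT = np.exp(-0.5 * dt * k**2)                 # full kinetic step (k^2/2)

psi = np.exp(-x**2).astype(complex)             # arbitrary starting guess
for _ in range(5000):
    psi = expV * np.fft.ifft(expT * np.fft.fft(expV * psi))
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L / N))   # renormalize

# Ground-state energy estimate <psi| T + V |psi>
Tpsi = np.fft.ifft(0.5 * k**2 * np.fft.fft(psi))
E0 = np.real(np.sum(np.conj(psi) * (Tpsi + V * psi)) * (L / N))
print(f"E0 ≈ {E0:.4f}")
```

A segregation-distorted profile only changes the array `V`; the propagation loop is untouched, which is precisely what makes the method convenient for the graded potentials discussed above.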
Abstract:
An x-ray diffraction method, based on the excitation of a surface diffracted wave, is described to investigate the capping process of InAs/GaAs (001) quantum dots (QDs). It is sensitive to the tiny misorientation of (111) planes at the surface of the buffer layer on samples with exposed QDs. After capping, the misorientation occurs in the cap-layer lattice faceting the QDs and its magnitude can be as large as 10 degrees depending on the QDs growth rates, probably due to changes in the size and shape of the QDs. A slow strain release process taking place at room temperature has also been observed by monitoring the misorientation angle of the (111) planes.
Abstract:
We report a comprehensive study of weak-localization and electron-electron interaction effects in a GaAs/InGaAs two-dimensional electron system with nearby InAs quantum dots, using measurements of the electrical conductivity with and without a magnetic field. Although both effects introduce temperature-dependent corrections to the zero-field conductivity at low temperatures, the magnetic field dependence of the conductivity is dominated by the weak-localization correction. We observed that the electron dephasing scattering rate tau_phi^(-1), obtained from the magnetoconductivity data, is enhanced by introducing quantum dots into the structure, as expected, and obeys a linear dependence on temperature and elastic mean free path, in contrast to the Fermi-liquid model. (c) 2008 American Institute of Physics. [DOI: 10.1063/1.2996034]