987 results for Linear optical quantum computation


Relevance:

30.00%

Publisher:

Abstract:

This study overviews the basics of TiO2 with respect to its structure, properties and applications. A brief account of its structural, electronic and optical properties is provided, and various emerging technological applications of TiO2 are also discussed. To date, an exceptionally large number of fundamental studies and application-oriented research and development efforts have been carried out worldwide on TiO2 in its low-dimensional nanomaterial forms, owing to its many novel properties. These nanostructured materials have shown favourable properties for potential applications including photocatalytic decomposition of pollutants, photovoltaic cells and sensors. This thesis presents an in-depth investigation of the linear and nonlinear optical and structural characteristics of different phases of TiO2. Correspondingly, extensive efforts to synthesise high-quality TiO2 nanostructure derivatives such as nanotubes, nanospheres and nanoflowers are ongoing. Here, different nanostructures of anatase TiO2 were synthesised and analysed. Morphologically different nanostructures were found to affect the physical and electronic properties differently, for example through varied surface area and dissimilar quantum confinement, and hence to suit different applications. Given these advantages, TiO2 can act as an excellent matrix for nanoparticle composite films, which may offer several advantageous functional optical characteristics. Detailed investigations of such nanocomposites were also performed and showed that they outperform the parent material. Fine tuning of these parameters helps researchers achieve high performance in their respective applications. These opportunities encompass recent progress in TiO2 studies for efficient use in photocatalytic and photovoltaic applications under visible light and highlight future trends of TiO2 research in environment- and energy-related fields with promising applications that benefit mankind. The last section of the thesis discusses the applicability of the analysed nanomaterials for dye-sensitised solar cells, followed by suggestions for future work.

Relevance:

30.00%

Publisher:

Abstract:

The scope of this work is the fundamental growth, tailoring and characterization of self-organized indium arsenide quantum dots (QDs) and their exploitation as the active region of diode lasers emitting in the 1.55 µm range. This wavelength regime is especially interesting for long-haul telecommunications, as optical fibers made from silica glass have their lowest optical absorption there. Molecular Beam Epitaxy (MBE) is utilized as the fabrication technique for the quantum dots and laser structures. The results presented in this thesis represent the first experimental work for which this reactor was used at the University of Kassel. Most research in the field of self-organized quantum dots has been conducted in the InAs/GaAs material system. It can be seen as the model system of self-organized quantum dots, but it is not suitable for the targeted emission wavelength: light emission from this system at 1.55 µm is hard to accomplish. To stay as close as possible to existing processing technology, the In(AlGa)As/InP (100) material system is deployed. Depending on the epitaxial growth technique and growth parameters, this system has the drawback of producing a wide range of nano species besides quantum dots, the best known being elongated quantum dashes (QDashes). Such structures are preferentially formed if InAs is deposited on InP. This is related to the low lattice mismatch of 3.2 %, which is less than half of the value in the InAs/GaAs system. The task of creating round-shaped and uniform QDs is further complicated by exchange effects between arsenic and phosphorus as well as anisotropic surface effects that do not need to be dealt with in the InAs/GaAs case. While QDash structures have been studied both fundamentally and in laser structures, they do not represent the theoretical ideal case of a zero-dimensional material. Creating round-shaped quantum dots on the InP(100) substrate remains a challenging task; details of the self-organization process are still unknown and the formation of the QDs is not fully understood yet. In the course of the experimental work a novel growth concept that eases the fabrication of QDs was discovered and analyzed. It is based on different crystal growth and ad-atom diffusion processes under supply of different modifications of the arsenic atmosphere in the MBE reactor. The reactor is equipped with special valved cracking effusion cells for arsenic and phosphorus. It represents an all-solid-source configuration that does not rely on a toxic gas supply. The cracking effusion cells are able to create different species of arsenic and phosphorus, and this constitutes the basis of the growth concept. With this method, round-shaped QD ensembles with superior optical properties and record-low photoluminescence linewidth were achieved. By systematically varying the growth parameters and analyzing the experimental data in detail, a range of parameter values for which the formation of QDs is favored was identified. A qualitative explanation of the formation characteristics based on the surface migration of In ad-atoms is developed. Such tailored QDs are finally implemented as the active region in a self-designed diode laser structure. A basic characterization of the static and temperature-dependent properties was carried out. The QD lasers exceed a reference quantum well laser in terms of inversion conditions and temperature-dependent characteristics. Pulsed output powers of several hundred milliwatts were measured at room temperature.
In particular, the lasers feature a high modal gain that even allowed cw emission at room temperature from a processed ridge waveguide device as short as 340 µm, with output powers of 17 mW. Modulation experiments performed at the Israel Institute of Technology (Technion) showed a complex behavior of the QDs in the laser cavity. Despite the fact that the laser structure is not fully optimized for a high-speed device, data transmission capabilities of 15 Gb/s combined with low noise were achieved. To the best of the author's knowledge, this renders these lasers the fastest QD devices operating at 1.55 µm. The thesis starts with an introductory chapter that outlines the advantages of optical fiber communication in general. Chapter 2 introduces the fundamental knowledge that is necessary to understand the importance of the active region's dimensions for the performance of a diode laser. The novel growth concept and its experimental analysis are presented in chapter 3. Chapter 4 finally contains the work on diode lasers.
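As a quick numerical check of the mismatch figures quoted above, here is a minimal sketch using textbook room-temperature lattice constants; the constants are assumed values for illustration, not data from the thesis.

```python
# Lattice mismatch f = (a_epilayer - a_substrate) / a_substrate for InAs on InP
# versus InAs on GaAs. Lattice constants (in Angstrom) are textbook values
# assumed here for illustration.
a_InAs, a_InP, a_GaAs = 6.0583, 5.8687, 5.6533

def mismatch(a_layer, a_sub):
    """Relative lattice mismatch of an epilayer on a substrate."""
    return (a_layer - a_sub) / a_sub

print(f"InAs/InP : {mismatch(a_InAs, a_InP):.1%}")   # ~3.2 %
print(f"InAs/GaAs: {mismatch(a_InAs, a_GaAs):.1%}")  # ~7.2 %
```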

Relevance:

30.00%

Publisher:

Abstract:

We investigate the effect of the epitaxial structure and the acceptor doping profile on the efficiency droop in InGaN/GaN LEDs by physics-based simulation of experimental internal quantum efficiency (IQE) characteristics. The device geometry is an integral part of our simulation approach. We demonstrate that even for single-quantum-well LEDs the droop depends critically on the acceptor doping profile. Auger recombination was found to increase more strongly than the third power of the carrier density and to dominate the droop in the roll-over region of the IQE. The fitted Auger coefficients are in the range of the values predicted by atomistic simulations.
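The droop analysis above relies on full physics-based device simulation; as a back-of-the-envelope illustration of the fitted behaviour, the sketch below evaluates a simple ABC-type rate-equation model of the IQE in which a carrier-density-dependent Auger term stands in for recombination that grows faster than the third power of the density. All coefficient values are illustrative assumptions, not fitted numbers from the paper.

```python
import numpy as np

# Illustrative ABC-type model of internal quantum efficiency (IQE) droop:
#   R = A*n + B*n^2 + C(n)*n^3,  IQE = B*n^2 / R
# A carrier-dependent Auger coefficient C(n) mimics recombination that grows
# faster than the third power of n. All coefficients are assumed values.
A = 1e7          # Shockley-Read-Hall coefficient [1/s]
B = 1e-11        # radiative coefficient [cm^3/s]
C0 = 1e-30       # Auger coefficient at low density [cm^6/s]
n_ref = 5e18     # density scale for the Auger enhancement [cm^-3]

n = np.logspace(16, 20, 200)            # carrier density [cm^-3]
C = C0 * (1.0 + n / n_ref)              # density-dependent Auger term
iqe = B * n**2 / (A * n + B * n**2 + C * n**3)

print("peak IQE  = %.2f at n = %.2e cm^-3" % (iqe.max(), n[iqe.argmax()]))
print("IQE droop = %.2f at n = %.2e cm^-3" % (iqe[-1], n[-1]))
```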

Relevance:

30.00%

Publisher:

Abstract:

In this work, the formation of QDs and the fabrication of QD-based semiconductor lasers for telecom applications are investigated. InAs QDs grown on AlGaInAs lattice-matched to InP substrates are used to fabricate lasers operating at 1.55 µm, the central wavelength for long-distance data transmission, chosen because standard glass fibers exhibit their attenuation minimum there. The incorporation of QDs in this material system is more complicated than for InAs QDs in the GaAs system: owing to the smaller lattice mismatch, circular QDs, elongated QDs and quantum wires can all form. The influence of different growth conditions, such as growth temperature, beam equivalent pressure and amount of deposited material, on the formation of the QDs is investigated. It had already been demonstrated that the QD formation process can be changed by the arsenic species: growth with As2 yields more round-shaped QDs, whereas As4 produces dash-like structures. In this work only As2 was used for QD growth. Different growth parameters were investigated to optimize the optical properties, such as the photoluminescence linewidth, and to implement these QD ensembles as the active medium of laser structures. With these QDs implemented in laser structures, a full width at half maximum (FWHM) of 30 meV was achieved. Another part of the research concerns the influence of the laser layer design on the lasing properties. QD lasers were demonstrated with a modal gain of more than 10 cm-1 per QD layer. Another achievement is large-signal modulation with a maximum data rate of 15 Gbit/s. Implementing optimized QDs in the laser structure allows the modal gain to be increased to 12 cm-1 per QD layer. A reduction of the waveguide layer thickness leads to a shorter carrier transport time into the active region, and as a result a data rate of up to 22 Gbit/s was achieved, which is so far the highest digital modulation rate obtained with any 1.55 µm QD laser. The implementation of etch stop layers in the laser structure provides the possibility of fabricating feedback gratings with well-defined geometries for the realization of DFB lasers. These DFB lasers were fabricated using a combination of dry and wet etching. Single-mode operation at 1.55 µm with a high side-mode suppression ratio of 50 dB was achieved.
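For the DFB lasers mentioned at the end of the abstract, the first-order grating period follows from the Bragg condition Lambda = lambda_B / (2 n_eff). A minimal sketch, with an assumed effective index since the abstract does not quote one:

```python
# First-order DFB Bragg grating period: Lambda = lambda_B / (2 * n_eff).
# n_eff is an assumed typical effective index for an InP-based ridge waveguide.
lambda_bragg = 1.55e-6   # target emission wavelength [m]
n_eff = 3.2              # assumed modal effective index

grating_period = lambda_bragg / (2 * n_eff)
print(f"first-order grating period ~ {grating_period * 1e9:.0f} nm")  # ~242 nm
```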

Relevance:

30.00%

Publisher:

Abstract:

Since no physical system can ever be completely isolated from its environment, the study of open quantum systems is pivotal to reliably and accurately control complex quantum systems. In practice, reliability of the control field needs to be confirmed via certification of the target evolution, while accuracy requires the derivation of high-fidelity control schemes in the presence of decoherence. In the first part of this thesis an algebraic framework is presented that determines the minimal requirements for the unique characterisation of arbitrary unitary gates in open quantum systems, independent of the particular physical implementation of the employed quantum device. To this end, a set of theorems is devised that can be used to assess whether a given set of input states on a quantum channel is sufficient to judge whether a desired unitary gate is realised. This makes it possible to determine the minimal input for such a task, which proves to be, quite remarkably, independent of system size. These results elucidate the fundamental limits regarding certification and tomography of open quantum systems. Combining these insights with state-of-the-art Monte Carlo process certification techniques permits a significant improvement of the scaling when certifying arbitrary unitary gates. This improvement is not restricted to quantum information devices where the basic information carrier is the qubit; it also extends to systems where the fundamental informational entities can be of arbitrary dimensionality, the so-called qudits. The second part of this thesis concerns the impact of these findings from the point of view of Optimal Control Theory (OCT). OCT for quantum systems utilises concepts from engineering such as feedback and optimisation to engineer constructive and destructive interferences in order to steer a physical process in a desired direction. It turns out that the aforementioned mathematical findings allow novel optimisation functionals to be deduced that significantly reduce not only the memory required by numerical control algorithms but also the total CPU time required to obtain a certain fidelity for the optimised process. The thesis concludes by discussing two problems of fundamental interest in quantum information processing from the point of view of optimal control: the preparation of pure states and the implementation of unitary gates in open quantum systems. For both cases specific physical examples are considered: for the former, the vibrational cooling of molecules via optical pumping, and for the latter, a superconducting phase qudit implementation. In particular, it is illustrated how features of the environment can be exploited to reach the desired targets.
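The certification question addressed in the first part (does a given set of input states suffice to decide whether a channel realises a target unitary?) can be illustrated with a toy numerical comparison: propagate a few pure input states through a noisy channel and average their fidelities against the target unitary's outputs. The sketch below uses an assumed single-qubit depolarising channel and an assumed input set; it illustrates the kind of comparison involved, not the algebraic framework or the Monte Carlo certification scheme of the thesis.

```python
import numpy as np

# Toy illustration: compare a noisy single-qubit channel against a target
# unitary by averaging state fidelities over a small set of pure input states.
# The channel, the input states and the noise level are assumptions.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # target gate

def depolarize(rho, p):
    """Single-qubit depolarizing noise applied after the ideal gate."""
    return (1 - p) * rho + p * np.eye(2) / 2

def channel(rho, p=0.05):
    ideal = H @ rho @ H.conj().T
    return depolarize(ideal, p)

def fidelity(rho, psi):
    """Fidelity of a (possibly mixed) state rho with a pure state psi."""
    return np.real(psi.conj() @ rho @ psi)

# A small set of pure input states: |0>, |1>, |+>, |+i>
inputs = [np.array([1, 0]), np.array([0, 1]),
          np.array([1, 1]) / np.sqrt(2), np.array([1, 1j]) / np.sqrt(2)]

fids = [fidelity(channel(np.outer(psi, psi.conj())), H @ psi) for psi in inputs]
print("average fidelity over input set: %.4f" % np.mean(fids))
```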

Relevance:

30.00%

Publisher:

Abstract:

We describe a method for modeling object classes (such as faces) using 2D example images and an algorithm for matching a model to a novel image. The object class models are "learned" from example images that we call prototypes. In addition to the images, the pixelwise correspondences between a reference prototype and each of the other prototypes must also be provided. Thus a model consists of a linear combination of prototypical shapes and textures. A stochastic gradient descent algorithm is used to match a model to a novel image by minimizing the error between the model and the novel image. Example models are shown, as well as example matches to novel images. The robustness of the matching algorithm is also evaluated. The technique can be used for a number of applications, including the computation of correspondence between novel images of a known class, object recognition, image synthesis and image compression.
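A minimal sketch of the flavour of model described above: a novel image is approximated as a linear combination of prototype images, with the coefficients fitted by gradient descent on the pixelwise error. The shape/correspondence component of the full method is omitted, and the images below are random placeholders.

```python
import numpy as np

# Fit a linear combination of prototype images to a novel image by gradient
# descent on the pixelwise squared error. Shapes/correspondences, which the
# full method also models, are omitted; data here are random placeholders.
rng = np.random.default_rng(0)
n_pixels, n_prototypes = 1024, 10
P = rng.normal(size=(n_pixels, n_prototypes))          # prototype images (columns)
true_c = rng.normal(size=n_prototypes)
novel = P @ true_c + 0.01 * rng.normal(size=n_pixels)  # synthetic novel image

c = np.zeros(n_prototypes)
lr = 1e-4
for step in range(2000):
    residual = P @ c - novel                # model error against the novel image
    c -= lr * (P.T @ residual)              # gradient of 0.5 * ||P c - novel||^2

print("reconstruction error:", np.linalg.norm(P @ c - novel))
```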

Relevance:

30.00%

Publisher:

Abstract:

We present a technique for the rapid and reliable evaluation of linear-functional outputs of elliptic partial differential equations with affine parameter dependence. The essential components are (i) rapidly, uniformly convergent reduced-basis approximations — Galerkin projection onto a space WN spanned by solutions of the governing partial differential equation at N (optimally) selected points in parameter space; (ii) a posteriori error estimation — relaxations of the residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs; and (iii) offline/online computational procedures — stratagems that exploit the affine parameter dependence to decouple the generation and projection stages of the approximation process. The operation count for the online stage — in which, given a new parameter value, we calculate the output and associated error bound — depends only on N (typically small) and the parametric complexity of the problem. The method is thus ideally suited to many-query and real-time contexts. In this paper, we build on this technique to develop a robust inverse computational method for the very fast solution of inverse problems characterized by parametrized partial differential equations. The essential ideas are threefold: first, we apply the technique to the forward problem for the rapid certified evaluation of PDE input-output relations and associated rigorous error bounds; second, we incorporate the reduced-basis approximation and error bounds into the inverse problem formulation; and third, rather than regularize the goodness-of-fit objective, we may instead identify all (or almost all, in the probabilistic sense) system configurations consistent with the available experimental data — well-posedness is reflected in a bounded "possibility region" that furthermore shrinks as the experimental error is decreased.
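The offline/online split in component (iii) hinges on the affine parameter dependence A(mu) = sum_q Theta_q(mu) A_q: the parameter-independent blocks are projected onto the reduced basis once offline, so the online stage only assembles and solves an N-by-N system. A minimal sketch with a random symmetric model problem standing in for the discretized PDE; all data below are placeholders.

```python
import numpy as np

# Reduced-basis offline/online decomposition for an affinely parametrized
# operator A(mu) = theta_1(mu) * A1 + theta_2(mu) * A2. The "truth" problem
# here is a random SPD system standing in for a discretized PDE.
rng = np.random.default_rng(1)
n, N = 400, 4                                   # truth size, reduced-basis size
M1, M2 = rng.normal(size=(n, n)), rng.normal(size=(n, n))
A1, A2 = M1 @ M1.T + n * np.eye(n), M2 @ M2.T + n * np.eye(n)   # SPD blocks
f = rng.normal(size=n)

def theta(mu):                                   # affine coefficient functions
    return 1.0, mu

def truth_solve(mu):
    t1, t2 = theta(mu)
    return np.linalg.solve(t1 * A1 + t2 * A2, f)

# --- offline: snapshots at selected parameter points, project the blocks ---
snapshots = np.column_stack([truth_solve(mu) for mu in (0.1, 0.5, 1.0, 2.0)])
W, _ = np.linalg.qr(snapshots)                   # orthonormal reduced basis
A1_N, A2_N, f_N = W.T @ A1 @ W, W.T @ A2 @ W, W.T @ f

# --- online: for a new mu, assemble and solve only an N x N system ---
mu_new = 0.7
t1, t2 = theta(mu_new)
u_N = W @ np.linalg.solve(t1 * A1_N + t2 * A2_N, f_N)
u_truth = truth_solve(mu_new)
print("relative error vs truth:",
      np.linalg.norm(u_N - u_truth) / np.linalg.norm(u_truth))
```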

Relevance:

30.00%

Publisher:

Abstract:

We study the preconditioning of symmetric indefinite linear systems of equations that arise in the interior-point solution of linear optimization problems. The preconditioning method that we study exploits the block structure of the augmented matrix to design a preconditioner with a similar block structure, improving the spectral properties of the resulting preconditioned matrix and thereby the convergence rate of the iterative solution of the system. We also propose a two-phase algorithm that takes advantage of the spectral properties of the transformed matrix to solve for the Newton directions in the interior-point method. Numerical experiments have been performed on LP test problems from the NETLIB suite to demonstrate the potential of the preconditioning method discussed.
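A minimal sketch of the kind of symmetric indefinite (saddle-point) system that arises in an interior-point step, solved with MINRES under a generic block-diagonal preconditioner. The specific preconditioner studied in the paper is not reproduced here; the block-diagonal choice, dimensions and data below are stand-in assumptions.

```python
import numpy as np
from scipy.sparse.linalg import minres, LinearOperator

# Symmetric indefinite (saddle-point) system from an interior-point step:
#   [ -(H + D)  A^T ] [dx]   [r1]
#   [    A       0  ] [dy] = [r2]
# solved with MINRES and a generic block-diagonal preconditioner
# diag(H + D, A (H+D)^-1 A^T). Dimensions and data are placeholders.
rng = np.random.default_rng(2)
n, m = 60, 20
A = rng.normal(size=(m, n))
D = np.diag(rng.uniform(0.1, 10.0, size=n))       # barrier scaling terms
H = np.eye(n)                                     # stand-in Hessian block
K = np.block([[-(H + D), A.T], [A, np.zeros((m, m))]])
rhs = rng.normal(size=n + m)

Hd = H + D
S = A @ np.linalg.solve(Hd, A.T)                  # Schur complement block

def apply_prec(v):
    """Apply the block-diagonal preconditioner to a vector."""
    return np.concatenate([np.linalg.solve(Hd, v[:n]),
                           np.linalg.solve(S, v[n:])])

M = LinearOperator((n + m, n + m), matvec=apply_prec)
sol, info = minres(K, rhs, M=M)
print("minres converged:", info == 0,
      " residual norm:", np.linalg.norm(K @ sol - rhs))
```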

Relevance:

30.00%

Publisher:

Abstract:

The general expansion of operators as a linear combination of projectors is defined, and its generalized application to the calculation of molecular integrals is presented. As a numerical example, the method is applied to the calculation of electron-repulsion integrals between four s-type functions centred at different points. Both the computed results and the definition of a scaling with respect to a reference value are shown; the latter will facilitate the optimisation of the expansion for arbitrary parameters. The results obtained are close to the exact value.
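For reference, the "exact value" against which such an expansion is checked can be obtained from the standard closed-form expression for the electron-repulsion integral over four primitive s-type Gaussians, via the Boys function F0. The sketch below implements that textbook formula, not the projector-expansion method of the abstract; the exponents and centres used are arbitrary test values.

```python
import numpy as np
from scipy.special import erf

# Closed-form electron-repulsion integral (ab|cd) between four unnormalized
# primitive s-type Gaussians exp(-alpha * |r - R|^2). This is the standard
# reference value such an expansion would be checked against.
def boys0(t):
    """Boys function F0(t) = 0.5 * sqrt(pi/t) * erf(sqrt(t)), with F0(0) = 1."""
    t = float(t)
    if t < 1e-12:
        return 1.0
    return 0.5 * np.sqrt(np.pi / t) * erf(np.sqrt(t))

def eri_ssss(a, A, b, B, c, C, d, D):
    p, q = a + b, c + d
    P = (a * A + b * B) / p                   # composite Gaussian centres
    Q = (c * C + d * D) / q
    pref = 2 * np.pi**2.5 / (p * q * np.sqrt(p + q))
    Kab = np.exp(-a * b / p * np.dot(A - B, A - B))
    Kcd = np.exp(-c * d / q * np.dot(C - D, C - D))
    rho = p * q / (p + q)
    return pref * Kab * Kcd * boys0(rho * np.dot(P - Q, P - Q))

A = B = C = D = np.zeros(3)                   # concentric test case
print(eri_ssss(1.0, A, 1.0, B, 1.0, C, 1.0, D))
```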

Relevance:

30.00%

Publisher:

Abstract:

Large-scale image mosaicing methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots to gather optical data from the seafloor. Cost and weight constraints mean that low-cost Remotely Operated Vehicles (ROVs) usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predetermined trajectory that provides several non-time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable for obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This thesis presents a set of consistent methods aimed at creating large-area image mosaics from optical data obtained during surveys with low-cost underwater vehicles. First, a global alignment method developed within a feature-based image mosaicing (FIM) framework, in which nonlinear minimisation is replaced by two linear steps, is discussed. Then, a simple four-point mosaic rectifying method is proposed to reduce distortions that might occur due to lens distortion, error accumulation and the difficulties of optical imaging in an underwater medium. The topology estimation problem is addressed by means of a combined augmented-state and extended Kalman filter framework, aimed at minimising the total number of matching attempts while simultaneously obtaining the best possible trajectory. Potential image pairs are predicted by taking into account the uncertainty in the trajectory, and the contribution of matching an image pair is assessed using information theory principles. Lastly, a different solution to the topology estimation problem is proposed in a bundle adjustment framework. Innovative aspects include the use of a fast image similarity criterion combined with a minimum spanning tree (MST) solution to obtain a tentative topology. This topology is then improved by attempting image matching with the pairs for which there is the most overlap evidence. Unlike previous approaches to large-area mosaicing, our framework is able to deal naturally with cases where time-consecutive images cannot be matched successfully, such as completely unordered sets. Finally, the efficiency of the proposed methods is discussed and compared with other state-of-the-art approaches using a series of challenging datasets in underwater scenarios.
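A minimal sketch of the MST step described above: build a symmetric image-similarity matrix from a fast global criterion, convert it to dissimilarities, and extract the minimum spanning tree, whose edges indicate which image pairs to attempt matching first. The descriptors below are random placeholders standing in for real image measurements.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Tentative topology from a fast image-similarity criterion plus an MST:
# the tree's edges point to the image pairs with the strongest overlap
# evidence. Global image descriptors here are random placeholders.
rng = np.random.default_rng(3)
n_images = 8
desc = rng.normal(size=(n_images, 64))                     # global descriptors
desc /= np.linalg.norm(desc, axis=1, keepdims=True)

similarity = desc @ desc.T                                  # cosine similarity
dissimilarity = 1.0 - similarity
np.fill_diagonal(dissimilarity, 0.0)                        # ignore self-edges

mst = minimum_spanning_tree(dissimilarity).toarray()
pairs = [(int(i), int(j)) for i, j in zip(*np.nonzero(mst))]
print("candidate pairs for image matching:", pairs)
```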

Relevance:

30.00%

Publisher:

Abstract:

The length and time scales accessible to optical tweezers make them an ideal tool for the examination of colloidal systems. Embedded high-refractive-index tracer particles in an index-matched hard-sphere suspension provide 'handles' within the system with which to investigate the mechanical behaviour. Passive observation of the motion of a single probe particle gives information about the linear response behaviour of the system, which can be linked to the macroscopic frequency-dependent viscous and elastic moduli of the suspension. Separate 'dragging' experiments allow observation of a sample's nonlinear response to an applied stress on a particle-by-particle basis. Optical force measurements have given new data about the dynamics of phase transitions and particle interactions; an example in this study is the transition from liquid-like to solid-like behaviour, and the emergence of a yield stress and other effects attributable to nearest-neighbour caging. The forces needed to break such cages and the frequency of these cage-breaking events are investigated in detail for systems close to the glass transition.
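The passive part of such measurements starts from the tracked position of the probe particle. The sketch below shows two standard first steps on a synthetic trajectory: trap stiffness from equipartition and the mean-squared displacement, whose time and frequency dependence is what gets related to the suspension's viscoelastic moduli. All parameter values are assumptions for illustration.

```python
import numpy as np

# Passive probe-particle analysis: trap stiffness from equipartition and the
# mean-squared displacement (MSD) of the tracked position. The trajectory is
# synthetic (overdamped Langevin dynamics) and all parameters are assumptions.
kB_T = 4.11e-21                    # thermal energy at room temperature [J]
dt, n_steps = 1e-4, 200_000        # sampling interval [s], number of samples
kappa_true, gamma = 1e-6, 1.9e-8   # trap stiffness [N/m], drag coefficient [kg/s]

rng = np.random.default_rng(4)
x = np.zeros(n_steps)
for i in range(1, n_steps):        # overdamped motion of the probe in the trap
    x[i] = x[i-1] - (kappa_true / gamma) * x[i-1] * dt \
           + np.sqrt(2 * kB_T / gamma * dt) * rng.normal()

kappa_est = kB_T / np.var(x)       # equipartition: <x^2> = kB*T / kappa
print(f"estimated trap stiffness: {kappa_est:.2e} N/m (true {kappa_true:.0e})")

lags = np.array([1, 10, 100, 1000])
msd = [np.mean((x[lag:] - x[:-lag])**2) for lag in lags]
print("MSD at lag times", lags * dt, ":", msd)
```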

Relevance:

30.00%

Publisher:

Abstract:

Matrix isolation IR spectroscopy has been used to study the vacuum pyrolysis of 1,1,3,3-tetramethyldisiloxane (L1), 1,1,3,3,5,5-hexamethyltrisiloxane (L2) and 3H,5H-octamethyltetrasiloxane (L3) at ca. 1000 K in a flow reactor at low pressures. The hydrocarbons CH3, CH4, C2H2, C2H4, and C2H6 were observed as prominent pyrolysis products in all three systems, and amongst the weaker features are bands arising from the methylsilanes Me2SiH2 (for L1 and L2) and Me3SiH (for L3). The fundamental of SiO was also observed very weakly. By use of quantum chemical calculations combined with earlier kinetic models, mechanisms have been proposed involving the intermediacy of the silanones Me2Si=O and MeSiH=O. Model calculations on the decomposition pathways of H3SiOSiH3 and H3SiOSiH2OSiH3 show that silanone elimination is favoured over silylene extrusion.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, new robust nonlinear model construction algorithms are introduced for a large class of linear-in-the-parameters models in order to enhance model robustness. They include three algorithms that respectively combine A-optimality, D-optimality or the PRESS statistic (Predicted REsidual Sum of Squares) with a regularised orthogonal least squares algorithm. A common characteristic of these algorithms is that the inherent computational efficiency associated with the orthogonalisation scheme in orthogonal least squares or regularised orthogonal least squares has been extended, so that the new algorithms are computationally efficient. A numerical example is included to demonstrate the effectiveness of the algorithms. Copyright (C) 2003 IFAC.
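A minimal sketch of the PRESS (leave-one-out) statistic for a linear-in-the-parameters model, computed without refitting by using the diagonal of the hat matrix; a simple ridge-style regularisation term stands in for the regularised orthogonal least squares machinery of the paper, and the regressors and data are synthetic placeholders.

```python
import numpy as np

# PRESS (predicted residual sum of squares) for a linear-in-the-parameters
# model via the leave-one-out shortcut e_i / (1 - h_ii), with a small ridge
# term standing in for the regularised orthogonal least squares of the paper.
rng = np.random.default_rng(5)
n, k, lam = 200, 6, 1e-3
X = rng.normal(size=(n, k))                       # candidate model terms
y = X @ rng.normal(size=k) + 0.1 * rng.normal(size=n)

G = X.T @ X + lam * np.eye(k)
theta = np.linalg.solve(G, X.T @ y)               # regularised LS parameters
H = X @ np.linalg.solve(G, X.T)                   # hat (smoother) matrix
residuals = y - X @ theta
h = np.diag(H)

press = np.sum((residuals / (1.0 - h))**2)        # leave-one-out residual sum
print("PRESS statistic:", press)
print("training SSE   :", np.sum(residuals**2))
```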

Relevance:

30.00%

Publisher:

Abstract:

The perspex machine arose from the unification of projective geometry with the Turing machine. It uses a total arithmetic, called transreal arithmetic, that contains real arithmetic and allows division by zero. Transreal arithmetic is redefined here. The new arithmetic has both a positive and a negative infinity, which lie at the extremes of the number line, and a number nullity that lies off the number line. We prove that nullity, 0/0, is a number. Hence a number may have one of four signs: negative, zero, positive, or nullity. It is, therefore, impossible to encode the sign of a number in one bit, as floating-point arithmetic attempts to do, resulting in the difficulty of having both positive and negative zeros and NaNs. Transrational arithmetic is consistent with Cantor arithmetic. In an extension of real arithmetic, the product of zero, an infinity, or nullity with its reciprocal is nullity, not unity. This avoids the usual contradictions that follow from allowing division by zero. Transreal arithmetic has a fixed algebraic structure and does not admit options as IEEE floating-point arithmetic does. Most significantly, nullity has a simple semantics that is related to zero: zero means "no value" and nullity means "no information." We argue that nullity is as useful to a manufactured computer as zero is to a human computer. The perspex machine is intended to offer one solution to the mind-body problem by showing how the computable aspects of mind, and perhaps the whole of mind, relate to the geometrical aspects of body and, perhaps, the whole of body. We review some of Turing's writings and show that he held the view that his machine has spatial properties; in particular, that it has the property of being a 7D lattice of compact spaces. Thus, we read Turing as believing that his machine relates computation to geometrical bodies. We simplify the perspex machine by substituting an augmented Euclidean geometry for projective geometry. This leads to a general-linear perspex machine which is very much easier to program than the original perspex machine. We then show how to map the whole of perspex space into a unit cube. This allows us to construct a fractal of perspex machines with the cardinality of a real-numbered line or space. This fractal is the universal perspex machine. It can solve, in unit time, the halting problem for itself and for all perspex machines instantiated in real-numbered space, including all Turing machines. We cite an experiment that has been proposed to test the physical reality of the perspex machine's model of time, but we make no claim that the physical universe works this way or that it has the cardinality of the perspex machine. We leave it that the perspex machine provides an upper bound on the computational properties of physical things, including manufactured computers and biological organisms, that have a cardinality no greater than the real-number line.
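A minimal sketch of transreal division as described above: a positive and a negative infinity at the extremes of the number line, and nullity = 0/0 off the line, so that division is total and never raises an error. The representation below (floats plus a separate nullity sentinel) is an illustrative choice, not the perspex machine's actual encoding.

```python
import math

# Toy transreal division: real arithmetic extended with +inf, -inf and nullity
# (0/0), so division is total. The float-plus-sentinel representation here is
# an illustrative choice only.
NULLITY = "nullity"   # the transreal number Phi = 0/0, off the number line

def t_div(a, b):
    """Transreal division a / b."""
    if a == NULLITY or b == NULLITY:
        return NULLITY                      # nullity propagates through operations
    if b == 0:
        if a == 0:
            return NULLITY                  # 0 / 0 = nullity
        return math.inf if a > 0 else -math.inf   # a / 0 = +/- infinity
    if math.isinf(b):
        # a / inf = a * (1/inf) = a * 0: nullity for infinite a, else zero
        return NULLITY if math.isinf(a) else 0.0
    return a / b

for case in [(1, 0), (-3, 0), (0, 0), (1, math.inf), (math.inf, math.inf), (6, 3)]:
    print(case, "->", t_div(*case))
```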

Relevance:

30.00%

Publisher:

Abstract:

The climate belongs to the class of non-equilibrium forced, dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for studying rigorously the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyze the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it allows the response of the system to be computed in terms of expectation values of explicit and computable functions of the phase space, averaged over the invariant measure of the unperturbed state. We choose as a test bed a classical version of the Lorenz 96 model which, in spite of its simplicity, has a well-recognized prototypical value: it is a spatially extended one-dimensional model and presents the basic ingredients, such as dissipation, advection and the presence of an external forcing, of the actual atmosphere. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyze the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behavior, validity of Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions as well as the integral constraints can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy. Some newly obtained empirical closure equations for such parameters allow these properties to be expressed as explicit functions of the unperturbed forcing parameter alone for a general class of chaotic Lorenz 96 models. We then verify the theoretical predictions from the outputs of the simulations to a high degree of precision. The theory is used to explain differences in the response of local and global observables, to define the intensive properties of the system, which do not depend on the spatial resolution of the Lorenz 96 model, and to generalize the concept of climate sensitivity to all time scales. We also show how to reconstruct the linear Green function, which maps perturbations with general time patterns into changes in the expectation value of the considered observable for finite as well as infinite time. Finally, we propose a simple yet general methodology for studying general climate change problems on virtually any time scale, by resorting to only well-selected simulations and by taking full advantage of ensemble methods. The specific case of the response of the globally averaged surface temperature to a general pattern of change of the CO2 concentration is discussed. We believe that the proposed approach may constitute a mathematically rigorous and practically very effective way to approach the problem of climate sensitivity, climate prediction, and climate change from a radically new perspective.
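The Lorenz 96 test bed mentioned above is compact enough to state directly: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F with cyclic indexing, combining advection, dissipation and external forcing. A minimal integration sketch follows; the forcing value and integration settings are illustrative, not those used in the paper.

```python
import numpy as np

# Lorenz 96 model: dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F,
# with cyclic boundary conditions. F and the settings below are illustrative.
def lorenz96_rhs(x, F):
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt, F):
    k1 = lorenz96_rhs(x, F)
    k2 = lorenz96_rhs(x + 0.5 * dt * k1, F)
    k3 = lorenz96_rhs(x + 0.5 * dt * k2, F)
    k4 = lorenz96_rhs(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

n, F, dt = 40, 8.0, 0.01
x = F * np.ones(n)
x[0] += 0.01                                   # small perturbation to trigger chaos
for _ in range(10_000):                        # spin-up and sampling
    x = rk4_step(x, dt, F)

print("mean energy 0.5*<x_i^2> after spin-up:", 0.5 * np.mean(x**2))
```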