44 results for fault-tolerant quantum computation


Relevance:

20.00%

Publisher:

Abstract:

This Thesis discusses the phenomenology of the dynamics of open quantum systems marked by non-Markovian memory effects. Non-Markovian open quantum systems are the focal point of a flurry of recent research aiming to answer, e.g., the following questions: What is the characteristic trait of non-Markovian dynamical processes that discriminates them from forgetful Markovian dynamics? What is the microscopic origin of memory in quantum dynamics, and how can it be controlled? Does the existence of memory effects open new avenues and enable accomplishments that cannot be achieved with Markovian processes? These questions are addressed in the publications forming the core of this Thesis with case studies of both prototypical and more exotic models of open quantum systems. In the first part of the Thesis, several ways of characterizing and quantifying non-Markovian phenomena are introduced. Their differences are then explored using a driven, dissipative qubit model. The second part of the Thesis focuses on the dynamics of a purely dephasing qubit model, which is used to unveil the origin of non-Markovianity for a wide class of dynamical models. The emergence of memory is shown to be strongly intertwined with the structure of the spectral density function, as further demonstrated in a physical realization of the dephasing model using ultracold quantum gases. Finally, as an application of memory effects, it is shown that non-Markovian dynamical processes facilitate a novel phenomenon of time-invariant discord, where the total quantum correlations of a system are frozen to their initial value. Non-Markovianity can also be exploited in the detection of phase transitions using quantum information probes, as shown using the physically interesting models of the Ising chain in a transverse field and a Coulomb chain undergoing a structural phase transition.
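
For context, a standard information-flow quantifier of this kind is the trace-distance measure of Breuer, Laine and Piilo; that this particular measure is among those compared here is an assumption, but it illustrates the idea:

    D(\rho_1, \rho_2) = \tfrac{1}{2} \lVert \rho_1 - \rho_2 \rVert_1, \qquad
    \mathcal{N}(\Phi) = \max_{\rho_{1,2}(0)} \int_{\sigma > 0} \sigma(t)\, \mathrm{d}t, \qquad
    \sigma(t) = \frac{\mathrm{d}}{\mathrm{d}t} D\bigl(\rho_1(t), \rho_2(t)\bigr)

Under Markovian dynamics the trace distance decreases monotonically, so \sigma(t) \le 0 at all times; any revival (\sigma > 0) signals a backflow of information from the environment and hence a memory effect.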

Relevance:

20.00%

Publisher:

Abstract:

In this Thesis, various aspects of memory effects in the dynamics of open quantum systems are studied. We develop a general theoretical framework for open quantum systems beyond the Markov approximation, which allows us to investigate different sources of memory effects and to develop methods for harnessing them in order to realise controllable open quantum systems. In the first part of the Thesis, a characterisation of non-Markovian dynamics in terms of information flow is developed and applied to study different sources of memory effects. Namely, we study nonlocal memory effects, which arise due to initial correlations between two local environments, as well as memory effects induced by initial correlations between the open system and the environment. The last part focuses on describing two all-optical experiments in which, through selective preparation of the initial environment states, the information flow between the system and the environment can be controlled. In the first experiment the system is driven from the Markovian to the non-Markovian regime and the degree of non-Markovianity is determined. In the second experiment we observe the nonlocal nature of the memory effects and provide a novel method to experimentally quantify frequency correlations in photonic environments via polarisation measurements.
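
A minimal numerical sketch of the information-flow idea for a purely dephasing qubit, assuming a toy decoherence function G(t) chosen only for illustration (it is not taken from the publications): for the antipodal initial pair |+>, |->, the trace distance reduces to |G(t)|, and its positive slopes accumulate into the non-Markovianity measure.

    import numpy as np

    def G(t):
        # Toy decoherence function with revivals (an assumption for illustration).
        return np.exp(-0.3 * t) * np.cos(2.0 * t)

    t = np.linspace(0.0, 10.0, 2001)
    D = np.abs(G(t))                   # trace distance for the antipodal pair
    sigma = np.gradient(D, t)          # information flux dD/dt
    dt = t[1] - t[0]
    N = np.sum(sigma[sigma > 0]) * dt  # integrate only the backflow episodes
    print(f"information-flow non-Markovianity: {N:.4f}")  # > 0 => memory effects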

Relevance:

20.00%

Publisher:

Abstract:

Multiprocessor system-on-chip (MPSoC) designs utilize the available technology and communication architectures to meet the requirements of upcoming applications. In an MPSoC, the communication platform is both the key enabler and the key differentiator for realizing efficient MPSoCs. It provides product differentiation to meet a diverse, multi-dimensional set of design constraints, including performance, power, energy, reconfigurability, scalability, cost, reliability and time-to-market. The communication resources of a single interconnection platform cannot be fully utilized by all kinds of applications; for example, providing high communication bandwidth for computation-intensive but not data-intensive applications is often infeasible in practical implementations. This thesis performs architecture-level design space exploration towards efficient and scalable resource utilization for MPSoC communication architectures. In order to meet the performance requirements within the design constraints, careful selection of the MPSoC communication platform and resource-aware partitioning and mapping of the application play an important role. To enhance the utilization of communication resources, a variety of techniques can be used, such as resource sharing, multicast to avoid re-transmission of identical data, and adaptive routing. For implementation, these techniques should be customized according to the platform architecture. To address the resource utilization of MPSoC communication platforms, a variety of architectures with different design parameters and performance levels, namely the Segmented bus (SegBus), Network-on-Chip (NoC) and Three-Dimensional NoC (3D-NoC), are selected. Average packet latency and power consumption are the evaluation parameters for the proposed techniques. In conventional computing architectures, a fault on one component renders the connected fault-free components inoperative. A resource-sharing approach can utilize the fault-free components to retain system performance by reducing the impact of faults. Design space exploration also helps narrow down the selection of an MPSoC architecture that can meet the performance requirements within the design constraints.
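
As an illustration of the average-packet-latency metric, a zero-load estimate for a 2D mesh NoC under deterministic XY routing and uniform random traffic; the mesh size and per-hop delays below are assumptions, and the thesis's evaluations would rely on detailed simulation rather than this closed-form sketch.

    import itertools

    N = 4          # mesh dimension (assumed)
    T_ROUTER = 3   # cycles per router traversal (assumed)
    T_LINK = 1     # cycles per link traversal (assumed)

    nodes = list(itertools.product(range(N), repeat=2))
    pairs = [(s, d) for s in nodes for d in nodes if s != d]
    # XY routing on a mesh follows the Manhattan distance between nodes.
    hops = [abs(sx - dx) + abs(sy - dy) for (sx, sy), (dx, dy) in pairs]
    avg_hops = sum(hops) / len(pairs)
    avg_latency = avg_hops * T_LINK + (avg_hops + 1) * T_ROUTER
    print(f"average hops: {avg_hops:.2f}, zero-load latency: {avg_latency:.2f} cycles")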

Relevance:

20.00%

Publisher:

Abstract:

After introducing the no-cloning theorem and the most common forms of approximate quantum cloning, universal quantum cloning is considered in detail. Its connections with the universal NOT gate, quantum cryptography and state estimation are presented and briefly discussed. The state estimation connection is used to show that the amount of extractable classical information and the total Bloch vector length are conserved in universal quantum cloning. The 1 → 2 qubit cloner is also shown to obey a complementarity relation between local and nonlocal information. These results are interpreted as a consequence of the conservation of total information in cloning. Finally, the performance of the 1 → M cloning network discovered by Bužek, Hillery and Knight is studied in the presence of decoherence using the approach of Barenco et al., where random phase fluctuations are attached to 2-qubit gates. The expression for the average fidelity is calculated for three cases and is found to depend on the optimal fidelity and the average of the phase fluctuations in a specific way; this is conjectured to be the form of the average fidelity in the general case. While the cloning network is found to be rather robust, it is nevertheless argued that the scalability of the quantum network implementation is poor, by studying the effect of decoherence during the preparation of the initial state of the cloning machine in the 1 → 2 case and observing that the loss in average fidelity can be large. This affirms the result of Maruyama and Knight, who reached the same conclusion in a slightly different manner.
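
For reference, the optimal universal 1 → M qubit cloning fidelity, whose M → ∞ limit coincides with the optimal state-estimation fidelity 2/3 and thus underlies the state-estimation connection discussed above:

    F_{1 \to M} = \frac{2M + 1}{3M}, \qquad F_{1 \to 2} = \frac{5}{6}, \qquad \lim_{M \to \infty} F_{1 \to M} = \frac{2}{3}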

Relevance:

20.00%

Publisher:

Abstract:

In this work, the feasibility of floating-gate technology in analog computing platforms in a scaled-down general-purpose CMOS technology is considered. When the technology is scaled down, the performance of analog circuits tends to get worse because the process parameters are optimized for digital transistors and the scaling involves the reduction of supply voltages. Generally, the challenge in analog circuit design is that all salient design metrics such as power, area, bandwidth and accuracy are interrelated. Furthermore, poor flexibility, i.e. lack of reconfigurability, reuse of IP, etc., can be considered the most severe weakness of analog hardware. On this account, digital calibration schemes are often required for improved performance or yield enhancement, whereas high flexibility/reconfigurability cannot be easily achieved. Here, it is discussed whether it is possible to work around these obstacles by using floating-gate transistors (FGTs), and the problems associated with their practical implementation are analyzed. FGT technology is attractive because it is electrically programmable and also features a charge-based built-in non-volatile memory. Apart from being ideal for canceling circuit non-idealities due to process variations, FGTs can also be used as computational or adaptive elements in analog circuits. The nominal gate oxide thickness in deep sub-micron (DSM) processes is too thin to support robust charge retention, and consequently the FGT becomes leaky. In principle, non-leaky FGTs can be implemented in a scaled-down process without any special masks by using “double”-oxide transistors intended for providing devices that operate with higher supply voltages than general-purpose devices. However, in practice the technology scaling poses several challenges, which are addressed in this thesis. To provide a sufficiently wide-ranging survey, six prototype chips of varying complexity were implemented in four different DSM process nodes and investigated from this perspective. The focus is on non-leaky FGTs, but the presented autozeroing floating-gate amplifier (AFGA) demonstrates that leaky FGTs may also find a use. The simplest test structures contain only a few transistors, whereas the most complex experimental chip is an implementation of a spiking neural network (SNN) comprising thousands of active and passive devices. More precisely, it is a fully connected two-layer SNN with 256 FGT synapses, where the adaptive properties of the FGT are taken advantage of. A compact realization of Spike Timing Dependent Plasticity (STDP) within the SNN is one of the key contributions of this thesis. Finally, the considerations in this thesis extend beyond CMOS to emerging nanodevices. To this end, one promising emerging nanoscale circuit element, the memristor, is reviewed and its applicability for analog processing is considered. Furthermore, it is discussed how FGT technology can be used to prototype computation paradigms compatible with these emerging two-terminal nanoscale devices in a mature and widely available CMOS technology.
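
As a sketch of the plasticity rule the FGT synapses realize, the standard pair-based STDP window; the amplitudes and time constants below are illustrative assumptions, not the values implemented on the chip.

    import numpy as np

    A_PLUS, A_MINUS = 0.05, 0.055     # potentiation/depression amplitudes (assumed)
    TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in ms (assumed)

    def stdp_dw(dt_ms):
        """Weight change for spike-time difference dt = t_post - t_pre."""
        if dt_ms > 0:                                # pre before post -> potentiation
            return A_PLUS * np.exp(-dt_ms / TAU_PLUS)
        return -A_MINUS * np.exp(dt_ms / TAU_MINUS)  # post before pre -> depression

    for dt in (-40, -10, 10, 40):
        print(f"dt = {dt:+4d} ms -> dw = {stdp_dw(dt):+.4f}")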

Relevance:

20.00%

Publisher:

Abstract:

This thesis presents a novel design paradigm, called Virtual Runtime Application Partitions (VRAP), to judiciously utilize on-chip resources. As the dark silicon era approaches, where power considerations will allow only a fraction of the chip to be powered on, judicious resource management will become a key consideration in future designs. Most works on resource management treat only the physical components (i.e. computation, communication, and memory blocks) as resources and manipulate the component-to-application mapping to optimize various parameters (e.g. energy efficiency). To further enhance the optimization potential, in addition to the physical resources we propose to manipulate abstract resources (i.e. the voltage/frequency operating point, the fault-tolerance strength, the degree of parallelism, and the configuration architecture). The proposed framework (i.e. VRAP) encapsulates methods, algorithms, and hardware blocks to provide each application with the abstract resources tailored to its needs. To test the efficacy of this concept, we have developed three distinct self-adaptive environments: (i) the Private Operating Environment (POE), (ii) the Private Reliability Environment (PRE), and (iii) the Private Configuration Environment (PCE), which collectively ensure that each application meets its deadlines using minimal platform resources. In this work, several novel architectural enhancements, algorithms and policies are presented to realize the virtual runtime application partitions efficiently. Considering future design trends, we have chosen Coarse Grained Reconfigurable Architectures (CGRAs) and Networks-on-Chip (NoCs) to test the feasibility of our approach. Specifically, we have chosen the Dynamically Reconfigurable Resource Array (DRRA) and McNoC as the representative CGRA and NoC platforms. The proposed techniques are compared and evaluated using a variety of quantitative experiments. Synthesis and simulation results demonstrate that VRAP significantly enhances energy and power efficiency compared to the state of the art.
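
A minimal sketch of what a per-application bundle of abstract resources might look like under this paradigm; the record fields and values are hypothetical illustrations, not the actual VRAP interfaces.

    from dataclasses import dataclass

    @dataclass
    class VirtualPartition:
        app: str
        vf_point: tuple        # (voltage mV, frequency MHz) operating point
        ft_strength: int       # fault-tolerance level, e.g. 0 = none, 2 = TMR
        parallelism: int       # degree of parallelism allotted
        deadline_us: float     # deadline the private environments must meet

    partitions = [
        VirtualPartition("fft", (900, 400), ft_strength=0, parallelism=4, deadline_us=50.0),
        VirtualPartition("crypto", (1100, 600), ft_strength=2, parallelism=1, deadline_us=20.0),
    ]
    for p in partitions:
        print(p)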

Relevance:

20.00%

Publisher:

Abstract:

The bedrock of old crystalline cratons is characteristically saturated with brittle structures formed during successive superimposed episodes of deformation and under varying stress regimes. As a result, the crust effectively deforms through the reactivation of pre-existing structures rather than through the activation, or generation, of new ones, and is said to be in a state of 'structural maturity'. By combining data from Olkiluoto Island, southwestern Finland, which has been investigated as the potential site of a deep geological repository for high-level nuclear waste, with observations from southern Sweden, it can be concluded that the southern part of the Svecofennian shield had already attained structural maturity during the Mesoproterozoic era. This indicates that the phase of activation of the crust, i.e. the time interval during which new fractures were generated, was brief in comparison to the subsequent reactivation phase. Structural maturity of the bedrock was also attained relatively rapidly in Namaqualand, western South Africa, after the formation of the first brittle structures during Neoproterozoic time. Subsequent brittle deformation in Namaqualand was controlled by the reactivation of pre-existing strike-slip faults. In such settings, seismic events are likely to occur through reactivation of pre-existing zones that are favourably oriented with respect to the prevailing stresses. In Namaqualand, this is shown for present-day seismicity by slip tendency analysis, and at Olkiluoto, for a Neoproterozoic earthquake reactivating a Mesoproterozoic fault. By combining detailed field observations with the results of paleostress inversions and relative and absolute time constraints, seven distinct superimposed paleostress regimes have been recognized in the Olkiluoto region. From oldest to youngest these are: (1) NW-SE to NNW-SSE transpression, which prevailed soon after 1.75 Ga, when the crust had cooled sufficiently to allow brittle deformation to occur; during this phase, conjugate NNW-SSE and NE-SW striking strike-slip faults were active simultaneously with reactivation of SE-dipping low-angle shear zones and foliation planes. This was followed by (2) N-S to NE-SW transpression, which caused partial reactivation of structures formed in the first event; (3) NW-SE extension during the Gothian orogeny, at the time of rapakivi magmatism and the intrusion of diabase dikes; (4) NE-SW transtension, which occurred between 1.60 and 1.30 Ga and also formed the NW-SE-trending Satakunta graben located some 20 km north of Olkiluoto; greisen-type veins also formed during this phase; (5) NE-SW compression, which postdates both the formation of the 1.56 Ga rapakivi granites and the 1.27 Ga olivine diabases of the region; (6) E-W transpression during the early stages of the Mesoproterozoic Sveconorwegian orogeny, which also predated (7) almost coaxial E-W extension attributed to the collapse of the Sveconorwegian orogeny. The kinematic analysis of fracture systems in crystalline bedrock also provides a robust framework for evaluating fluid-rock interaction in the brittle regime; this is essential in the assessment of bedrock integrity for numerous geo-engineering applications, including groundwater management, transient or permanent CO2 storage, and site investigations for permanent waste disposal.
Investigations at Olkiluoto revealed that fluid flow along fractures is coupled with low normal tractions due to in-situ stresses, and thus deviates from the generally accepted critically stressed fracture concept, in which fluid flow is concentrated on fractures on the verge of failure. The difference is linked to the shallow conditions at Olkiluoto: because of the low differential stresses inherent at shallow depths, fracture activation and fluid flow are controlled by dilation due to low normal tractions. At deeper settings, however, fluid flow is controlled by fracture criticality caused by large differential stress, which drives shear deformation instead of dilation.
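
For reference, the slip tendency used in such analyses is the ratio of the resolved shear traction to the normal traction on a plane,

    T_s = \frac{\tau}{\sigma_n}

with reactivation favoured on planes whose T_s approaches the static friction coefficient; the dilation-controlled flow observed at Olkiluoto instead singles out planes with low \sigma_n.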

Relevance:

20.00%

Publisher:

Abstract:

Invocation: D.F.G.

Relevance:

20.00%

Publisher:

Abstract:

Dedication: Henricus Florinus, Jonas Petrejus, Jacobus Lvnd, Jsaacus Piilman, Ericus Ehrling, Nicolaus Procopaeus.

Relevance:

20.00%

Publisher:

Abstract:

In this thesis, the basic structure and operational principles of single- and multi-junction solar cells are considered and discussed. The main properties and characteristics of solar cells are briefly described. Modified equipment for measuring the quantum efficiency of multi-junction solar cells is presented. Results of experimental research on single- and multi-junction solar cells are described.
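
For reference, the external quantum efficiency measured with such equipment is the ratio of collected charge carriers to incident photons at each wavelength,

    \mathrm{EQE}(\lambda) = \frac{I_{\mathrm{ph}}(\lambda)/q}{P_{\mathrm{opt}}(\lambda)\,\lambda/(hc)}

where I_{\mathrm{ph}} is the photocurrent, P_{\mathrm{opt}} the incident optical power and q the elementary charge.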

Relevance:

20.00%

Publisher:

Abstract:

Recent storms in the Nordic countries caused long power outages over huge territories. After these disasters, distribution network operators faced the problem of how to provide an adequate quality of supply in such situations. The decision was made to utilize cable lines rather than overhead lines, which brings new features to distribution networks. The main idea of this work is a complex analysis of medium-voltage distribution networks with long cable lines. The high specific capacitance of cables and the length of the lines give rise to such problems as high earth fault currents, excessive reactive power flow from the distribution to the transmission network, and the possibility of a high voltage level at the receiving end of cable feeders. However, the core task was to estimate the functional ability of the earth fault protection and the possibility of utilizing simplified formulas for operating-setting calculations in this network. In order to provide a justified solution to, or evaluation of, the problems mentioned above, the corresponding calculations were made, and in order to analyze the behavior of relay protection principles, a PSCAD model of the examined network was created. Evaluation of the voltage rise at the end of a cable line revealed the absence of a dangerous increase in voltage level, while an excessive value of reactive power can lead to a financial penalty according to the Finnish regulations. It was proved and calculated that for these networks compensation of earth fault currents should be implemented. In PSCAD, models of the electrical grid with an isolated neutral, central compensation and hybrid compensation were created. For the network with hybrid compensation, a methodology which allows selecting the number and rated power of distributed arc suppression coils is offered. Based on the results obtained from the experiments, it was determined that in order to guarantee selective and reliable operation of the relay protection, hybrid compensation with the connection of a high-ohmic resistor should be utilized. Directional and admittance-based relay protection were tested under these conditions, and the advantages of the novel protection were revealed. However, for electrical grids with extensive cabling, the necessity of a comprehensive approach to relay protection was explained and illustrated. Thus, in order to organize reliable earth fault protection, it is recommended to utilize both intermittent and conventional relay protection with operational settings calculated by the use of simplified formulas.
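
A numerical sketch of the kind of simplified earth-fault-current estimate referred to above, for an isolated-neutral cable network; the network figures are illustrative assumptions, not values from the thesis.

    import math

    U_LL = 20e3          # line-to-line voltage, V (assumed)
    F_HZ = 50.0          # system frequency, Hz
    C0_PER_KM = 0.3e-6   # phase-to-earth capacitance, F/km (assumed)
    LENGTH_KM = 100.0    # total cable length, km (assumed)

    u_ph = U_LL / math.sqrt(3)
    c0 = C0_PER_KM * LENGTH_KM
    # Simplified single-phase earth-fault current of an unearthed network:
    # I_ef = 3 * omega * C0 * U_ph
    i_ef = 3 * 2 * math.pi * F_HZ * c0 * u_ph
    print(f"uncompensated earth-fault current: {i_ef:.0f} A")

    # A resonance-tuned arc suppression coil must carry roughly the same
    # current, giving a first estimate of the total compensation rating.
    print(f"required coil rating: {u_ph * i_ef / 1e3:.0f} kvar")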

Relevance:

20.00%

Publisher:

Abstract:

Optimization of quantum measurement processes has a pivotal role in carrying out better, i.e., more accurate or less disruptive, measurements and experiments on a quantum system. In particular, convex optimization, i.e., identifying the extreme points of the convex sets and subsets of quantum measuring devices, plays an important part in quantum optimization, since the typical figures of merit for measuring processes are affine functionals. In this thesis, we discuss results determining the extreme quantum devices and their relevance, e.g., in quantum-compatibility-related questions. In particular, we see that a compatible device pair where one device is extreme can be joined into a single apparatus in an essentially unique way. Moreover, we show that the question of whether a pair of quantum observables can be measured jointly can often be formulated in a weaker form when some of the observables involved are extreme. Another major line of research treated in this thesis deals with convex analysis of special restricted quantum device sets, covariance structures or, in particular, generalized imprimitivity systems. Some results on the structure of covariant observables and instruments are listed, as well as results identifying the extreme points of covariance structures in quantum theory. As a special case study, not published anywhere before, we study the structure of Euclidean-covariant localization observables for spin-0 particles. We also discuss the general form of Weyl-covariant phase-space instruments. Finally, certain optimality measures originating from convex geometry are introduced for quantum devices, namely boundariness, measuring how ‘close’ to the algebraic boundary of the device set a quantum apparatus is, and the robustness of incompatibility, quantifying the level of incompatibility for a quantum device pair by measuring the highest amount of noise the pair tolerates without becoming compatible. Boundariness is further associated with minimum-error discrimination of quantum devices, and the robustness of incompatibility is shown to behave monotonically under certain compatibility-non-decreasing operations. Moreover, the value of the robustness of incompatibility is given for a few special device pairs.
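
The role of the extreme points follows from the affinity of the figures of merit: for an affine functional f on a convex set of devices,

    f\bigl(\lambda \mu_1 + (1 - \lambda) \mu_2\bigr) = \lambda f(\mu_1) + (1 - \lambda) f(\mu_2), \qquad 0 \le \lambda \le 1

so the optimum of f over a compact convex device set is always attained at an extreme point, and optimization can be restricted to the extreme devices.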