956 results for fault-tolerant quantum computation


Relevance: 20.00%

Abstract:

Herbicides used in the Clearfield® rice system may persist in the environment, damaging non-tolerant crops sown in succession and/or rotation. The extent of this damage varies with soil characteristics, climate and soil management. The thickness of the soil profile may affect the carryover effect; deeper soils may allow these molecules to leach below the root absorption zone. The aim of this study was to evaluate the effect of soil profile thickness on the carryover of imazethapyr + imazapic to ryegrass and non-tolerant rice, sown in succession and rotation to rice, respectively. Lysimeters of different thicknesses (15, 20, 30, 40, 50 and 65 cm) were constructed, and 1 L ha⁻¹ of the formulated imazethapyr + imazapic mixture was applied to tolerant rice. Imidazolinone-tolerant rice was planted first, followed by ryegrass and non-tolerant rice in succession and rotation, respectively. Herbicide injury, height reduction and dry weight of the non-tolerant species were assessed. There were no visual symptoms of herbicide injury on ryegrass sown 128 days after herbicide application; however, the herbicides did reduce plant dry weight. The herbicides persisted in the soil and injured non-tolerant rice sown 280 days after application, and the deeper the soil profile, the lower the herbicide injury on irrigated rice.

Relevance: 20.00%

Abstract:

In order to identify early abnormalities in non-insulin-dependent diabetes mellitus (NIDDM) we determined insulin (using an assay that does not cross-react with proinsulin) and proinsulin concentrations. The proinsulin/insulin ratio was used as an indicator of abnormal β-cell function. The ratio of the first 30-min increase in insulin to glucose concentrations following the oral glucose tolerance test (OGTT; I30-0/G30-0) was taken as an indicator of insulin secretion. Insulin resistance (R) was evaluated by the homeostasis model assessment (HOMA) method. True insulin and proinsulin were measured during a 75-g OGTT in 35 individuals: 20 with normal glucose tolerance (NGT) and without diabetes among their first-degree relatives (FDR) served as controls, and 15 with NGT who were FDR of patients with NIDDM. The FDR group presented higher insulin (414 pmol/l vs 195 pmol/l; P = 0.04) and proinsulin levels (19.6 pmol/l vs 12.3 pmol/l; P = 0.03) post-glucose load than the control group. When these groups were stratified according to BMI, the obese FDR (N = 8) showed higher fasting and post-glucose insulin levels than the obese NGT (N = 9) (fasting: 64.8 pmol/l vs 7.8 pmol/l; P = 0.04, and 60 min post-glucose: 480.6 pmol/l vs 192 pmol/l; P = 0.01). Values for HOMA (R) were also higher in the obese FDR than in the obese NGT (2.53 vs 0.30; P = 0.075). These results show that FDR of NIDDM patients have true hyperinsulinemia (which is not a consequence of cross-reactivity with proinsulin) and hyperproinsulinemia, with no qualitative dysfunction of β-cells.
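As a hedged illustration of the two indices relied on above, here are their standard definitions in code (not code from the study; the unit-conversion factor and example numbers are assumptions):

```python
# HOMA-IR uses fasting glucose (mmol/L) and fasting insulin (microU/mL);
# the insulinogenic index uses the 0- and 30-min OGTT samples.

def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uU_ml: float) -> float:
    """Homeostasis model assessment of insulin resistance."""
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

def insulinogenic_index(i30_pmol_l: float, i0_pmol_l: float,
                        g30_mmol_l: float, g0_mmol_l: float) -> float:
    """(I30-I0)/(G30-G0): early-phase insulin secretion during an OGTT."""
    return (i30_pmol_l - i0_pmol_l) / (g30_mmol_l - g0_mmol_l)

# Illustrative (not the study's) numbers; 1 microU/mL of insulin is
# approximately 6 pmol/L, so 60 pmol/L ~ 10 microU/mL.
print(homa_ir(5.0, 10.0))                       # ~2.2
print(insulinogenic_index(400, 60, 8.0, 5.0))   # ~113 pmol insulin per mmol glucose
```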

Relevance: 20.00%

Abstract:

In this work, the feasibility of floating-gate technology in analog computing platforms in a scaled-down general-purpose CMOS technology is considered. When the technology is scaled down, the performance of analog circuits tends to worsen, because the process parameters are optimized for digital transistors and the scaling involves the reduction of supply voltages. Generally, the challenge in analog circuit design is that all salient design metrics, such as power, area, bandwidth and accuracy, are interrelated. Furthermore, poor flexibility, i.e. the lack of reconfigurability, reuse of IP, etc., can be considered the most severe weakness of analog hardware. On this account, digital calibration schemes are often required for improved performance or yield enhancement, whereas high flexibility/reconfigurability cannot be easily achieved. Here, it is discussed whether it is possible to work around these obstacles by using floating-gate transistors (FGTs), and the problems associated with their practical implementation are analyzed. FGT technology is attractive because it is electrically programmable and features a charge-based built-in non-volatile memory. Apart from being ideal for canceling circuit non-idealities due to process variations, FGTs can also be used as computational or adaptive elements in analog circuits. The nominal gate oxide thickness in deep sub-micron (DSM) processes is too thin to support robust charge retention, and consequently the FGT becomes leaky. In principle, non-leaky FGTs can be implemented in a scaled-down process without any special masks by using “double”-oxide transistors, intended to provide devices that operate with higher supply voltages than general-purpose devices. In practice, however, the technology scaling poses several challenges, which are addressed in this thesis. To provide a sufficiently wide-ranging survey, six prototype chips of varying complexity were implemented in four different DSM process nodes and investigated from this perspective. The focus is on non-leaky FGTs, but the presented autozeroing floating-gate amplifier (AFGA) demonstrates that leaky FGTs may also find a use. The simplest test structures contain only a few transistors, whereas the most complex experimental chip is an implementation of a spiking neural network (SNN) comprising thousands of active and passive devices. More precisely, it is a fully connected (256 FGT synapses) two-layer SNN that takes advantage of the adaptive properties of the FGT. A compact realization of Spike Timing Dependent Plasticity (STDP) within the SNN is one of the key contributions of this thesis. Finally, the considerations in this thesis extend beyond CMOS to emerging nanodevices. To this end, one promising emerging nanoscale circuit element, the memristor, is reviewed and its applicability to analog processing is considered. Furthermore, it is discussed how FGT technology can be used to prototype computation paradigms compatible with these emerging two-terminal nanoscale devices in a mature and widely available CMOS technology.
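For orientation, a minimal, generic sketch of the pair-based STDP rule named above (this is the textbook form, not the thesis's FGT circuit implementation; all constants are illustrative):

```python
import math

# Pair-based STDP: potentiation when the presynaptic spike precedes the
# postsynaptic one (dt > 0), depression otherwise. Constants are illustrative.
A_PLUS, A_MINUS = 0.01, 0.012     # learning-rate amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants (ms)

def stdp_dw(dt_ms: float) -> float:
    """Weight update for a spike pair separated by dt = t_post - t_pre (ms)."""
    if dt_ms > 0:
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)   # causal pair: potentiate
    return -A_MINUS * math.exp(dt_ms / TAU_MINUS)     # anti-causal: depress

# A pre-spike arriving 5 ms before the post-spike strengthens the synapse;
# the reversed order weakens it.
print(stdp_dw(5.0))   # ~ +0.0078
print(stdp_dw(-5.0))  # ~ -0.0093
```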

Relevance: 20.00%

Abstract:

This thesis presents a novel design paradigm, called Virtual Runtime Application Partitions (VRAP), to judiciously utilize on-chip resources. As the dark silicon era approaches, where power considerations will allow only a fraction of the chip to be powered on, judicious resource management will become a key consideration in future designs. Most work on resource management treats only the physical components (i.e. computation, communication, and memory blocks) as resources and manipulates the component-to-application mapping to optimize various parameters (e.g. energy efficiency). To further enhance the optimization potential, we propose to manipulate, in addition to the physical resources, abstract resources (i.e. the voltage/frequency operating point, the fault-tolerance strength, the degree of parallelism, and the configuration architecture). The proposed framework (i.e. VRAP) encapsulates methods, algorithms, and hardware blocks to provide each application with the abstract resources tailored to its needs. To test the efficacy of this concept, we have developed three distinct self-adaptive environments: (i) the Private Operating Environment (POE), (ii) the Private Reliability Environment (PRE), and (iii) the Private Configuration Environment (PCE), which collectively ensure that each application meets its deadlines using minimal platform resources. In this work several novel architectural enhancements, algorithms and policies are presented to realize the virtual runtime application partitions efficiently. Considering future design trends, we have chosen Coarse Grained Reconfigurable Architectures (CGRAs) and Networks on Chip (NoCs) to test the feasibility of our approach; specifically, we have chosen the Dynamically Reconfigurable Resource Array (DRRA) and McNoC as the representative CGRA and NoC platforms. The proposed techniques are compared and evaluated using a variety of quantitative experiments. Synthesis and simulation results demonstrate that VRAP significantly enhances energy and power efficiency compared to the state of the art.
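As a hypothetical sketch of the core idea (every name and number below is invented for illustration and is not VRAP's actual interface), each application can be handed a bundle of abstract resources alongside its physical ones, and a partition is feasible when the bundle lets the application meet its deadline:

```python
from dataclasses import dataclass

# Hypothetical illustration: a per-application bundle of the abstract
# resources the abstract lists (V/f point, fault tolerance, parallelism,
# reconfigurable-fabric configuration). All names are invented.
@dataclass
class VirtualPartition:
    app_id: str
    vf_point: tuple[float, float]   # (voltage V, frequency MHz) operating point
    ft_strength: int                # fault-tolerance level, e.g. 0 = none, 2 = TMR
    parallelism: int                # degree of parallelism granted
    config_slots: int               # reconfigurable-fabric slots assigned

def meets_deadline(p: VirtualPartition, workload_cycles: int,
                   deadline_ms: float) -> bool:
    """Crude feasibility check: cycles / (frequency * parallelism) vs deadline."""
    _, f_mhz = p.vf_point
    runtime_ms = workload_cycles / (f_mhz * 1e3 * p.parallelism)
    return runtime_ms <= deadline_ms

p = VirtualPartition("fft", (0.9, 400.0), ft_strength=1, parallelism=4, config_slots=2)
print(meets_deadline(p, workload_cycles=2_000_000, deadline_ms=2.0))  # True (~1.25 ms)
```

A runtime manager in this spirit would search over such bundles and keep the cheapest one that still satisfies the deadline, which is the optimization freedom that treating abstract quantities as resources adds.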

Relevance: 20.00%

Abstract:

The bedrock of old crystalline cratons is characteristically saturated with brittle structures formed during successive superimposed episodes of deformation and under varying stress regimes. As a result, the crust effectively deforms through the reactivation of pre-existing structures rather than through the activation, or generation, of new ones, and is said to be in a state of 'structural maturity'. By combining data from Olkiluoto Island, southwestern Finland, which has been investigated as the potential site of a deep geological repository for high-level nuclear waste, with observations from southern Sweden, it can be concluded that the southern part of the Svecofennian shield had already attained structural maturity during the Mesoproterozoic era. This indicates that the phase of activation of the crust, i.e. the time interval during which new fractures were generated, was brief in comparison to the subsequent reactivation phase. Structural maturity of the bedrock was also attained relatively rapidly in Namaqualand, western South Africa, after the formation of the first brittle structures during Neoproterozoic time. Subsequent brittle deformation in Namaqualand was controlled by the reactivation of pre-existing strike-slip faults. In such settings, seismic events are likely to occur through reactivation of pre-existing zones that are favourably oriented with respect to prevailing stresses. In Namaqualand, this is shown for present-day seismicity by slip tendency analysis, and at Olkiluoto, for a Neoproterozoic earthquake reactivating a Mesoproterozoic fault.

By combining detailed field observations with the results of paleostress inversions and relative and absolute time constraints, seven distinct superimposed paleostress regimes have been recognized in the Olkiluoto region. From oldest to youngest these are: (1) NW-SE to NNW-SSE transpression, which prevailed soon after 1.75 Ga, when the crust had cooled sufficiently to allow brittle deformation to occur; during this phase, conjugate NNW-SSE and NE-SW striking strike-slip faults were active simultaneously with reactivation of SE-dipping low-angle shear zones and foliation planes. This was followed by (2) N-S to NE-SW transpression, which caused partial reactivation of structures formed in the first event; (3) NW-SE extension during the Gothian orogeny, at the time of rapakivi magmatism and the intrusion of diabase dikes; (4) NE-SW transtension between 1.60 and 1.30 Ga, which also formed the NW-SE-trending Satakunta graben located some 20 km north of Olkiluoto, and during which greisen-type veins formed; (5) NE-SW compression that postdates both the 1.56 Ga rapakivi granites and the 1.27 Ga olivine diabases of the region; and (6) E-W transpression during the early stages of the Mesoproterozoic Sveconorwegian orogeny, which predated (7) almost coaxial E-W extension attributed to the collapse of the Sveconorwegian orogeny.

The kinematic analysis of fracture systems in crystalline bedrock also provides a robust framework for evaluating fluid-rock interaction in the brittle regime; this is essential in assessing bedrock integrity for numerous geo-engineering applications, including groundwater management, transient or permanent CO2 storage, and site investigations for permanent waste disposal. Investigations at Olkiluoto revealed that fluid flow along fractures is coupled with low normal tractions due to in-situ stresses, and thus deviates from the generally accepted critically stressed fracture concept, in which fluid flow is concentrated on fractures on the verge of failure. The difference is linked to the shallow conditions of Olkiluoto: owing to the low differential stresses inherent at shallow depths, fracture activation and fluid flow are controlled by dilation due to low normal tractions. In deeper settings, however, fluid flow is controlled by fracture criticality caused by large differential stress, which drives shear deformation instead of dilation.
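The slip tendency analysis mentioned above reduces to a short computation: resolve the stress tensor on the fracture plane and take the ratio of shear to normal traction, Ts = τ/σn. A minimal sketch with an illustrative stress state (not the Olkiluoto or Namaqualand data):

```python
import numpy as np

def slip_tendency(stress: np.ndarray, normal: np.ndarray) -> float:
    """Slip tendency Ts = tau / sigma_n for a plane with unit normal `normal`
    under the 3x3 stress tensor `stress` (compression positive, MPa)."""
    n = normal / np.linalg.norm(normal)
    traction = stress @ n                   # traction vector on the plane
    sigma_n = float(n @ traction)           # normal stress component
    tau = float(np.sqrt(max(traction @ traction - sigma_n**2, 0.0)))  # shear
    return tau / sigma_n

# Illustrative stress state: sigma1 = 30 MPa (N-S), sigma3 = 10 MPa (E-W),
# sigma2 = 20 MPa (vertical); axes x = north, y = east, z = down.
stress = np.diag([30.0, 10.0, 20.0])
fault_normal = np.array([np.cos(np.radians(45)), np.sin(np.radians(45)), 0.0])
print(slip_tendency(stress, fault_normal))  # 0.5 for a vertical NW-SE fault
```

Planes whose Ts approaches the friction coefficient are the "favourably oriented" ones most likely to reactivate under the prevailing stresses.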

Relevance: 20.00%

Abstract:

Invocation: D.F.G.

Relevance: 20.00%

Abstract:

Tolerance to lipopolysaccharide (LPS) occurs when animals or cells exposed to LPS become hyporesponsive to a subsequent challenge with LPS. This mechanism is believed to be involved in the down-regulation of cellular responses observed in septic patients. The aim of this investigation was to evaluate LPS-induced monocyte tolerance in healthy volunteers using whole blood. Intracellular IL-6, bacterial phagocytosis and reactive oxygen species (ROS) were detected by flow cytometry, using anti-IL-6-PE, heat-killed Staphylococcus aureus stained with propidium iodide, and 2',7'-dichlorofluorescein diacetate, respectively. Monocytes were gated in whole blood by combining FSC and SSC parameters and CD14-positive staining. Exposure to increasing LPS concentrations resulted in lower intracellular concentrations of IL-6 in monocytes after challenge. A similar effect was observed after challenge with MALP-2 (a Toll-like receptor (TLR)2/6 agonist) and killed Pseudomonas aeruginosa and S. aureus, but not with flagellin (a TLR5 agonist). LPS conditioning with 15 ng/mL resulted in a 40% reduction of IL-6 in monocytes. In contrast, phagocytosis of P. aeruginosa and S. aureus and induced ROS generation were preserved or increased in tolerant cells. The phenomenon of tolerance thus involves a complex regulation in which the production of IL-6 was diminished, whereas bacterial phagocytosis and production of ROS were preserved. Decreased production of proinflammatory cytokines together with preserved or increased production of ROS may be an adaptation to control the deleterious effects of inflammation while preserving antimicrobial activity.
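A hedged sketch of the gating and readout logic described above, run on synthetic events rather than the study's data (thresholds, columns and values are all invented for illustration):

```python
import pandas as pd

# Synthetic flow-cytometry events: monocytes are selected by FSC/SSC ranges
# plus CD14 positivity, then the drop in intracellular IL-6 after LPS
# conditioning is quantified. All numbers are illustrative.
events = pd.DataFrame({
    "FSC": [520, 610, 300, 580], "SSC": [410, 450, 150, 430],
    "CD14": [1, 1, 0, 1], "IL6_MFI": [100.0, 90.0, 5.0, 55.0],
    "conditioned": [False, False, False, True],
})
monocytes = events[events.FSC.between(450, 700)
                   & events.SSC.between(300, 600) & (events.CD14 == 1)]
naive = monocytes[~monocytes.conditioned].IL6_MFI.mean()
tolerant = monocytes[monocytes.conditioned].IL6_MFI.mean()
print(f"IL-6 reduction: {100 * (1 - tolerant / naive):.0f}%")  # ~42%
```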

Relevance: 20.00%

Abstract:

Dedication: Henricus Florinus, Jonas Petrejus, Jacobus Lvnd, Jsaacus Piilman, Ericus Ehrling, Nicolaus Procopaeus.

Relevance: 20.00%

Abstract:

In this thesis the basic structure and operating principles of single- and multi-junction solar cells are considered and discussed. The main properties and characteristics of solar cells are briefly described. Modified equipment for measuring the quantum efficiency of multi-junction solar cells is presented. Results of experimental research on single- and multi-junction solar cells are described.
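For context, external quantum efficiency follows from a standard textbook relation, independent of this thesis's modified measurement setup; a minimal sketch with illustrative numbers:

```python
# EQE = (electrons collected per second) / (photons incident per second)
#     = (I / e) / (P * lambda / (h * c))
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
E = 1.602176634e-19  # elementary charge, C

def external_quantum_efficiency(photocurrent_a: float,
                                optical_power_w: float,
                                wavelength_m: float) -> float:
    """External quantum efficiency at a single wavelength."""
    electrons_per_s = photocurrent_a / E
    photons_per_s = optical_power_w * wavelength_m / (H * C)
    return electrons_per_s / photons_per_s

# Example: 0.4 mA of photocurrent from 1 mW of 550 nm light -> EQE ~ 0.90
print(round(external_quantum_efficiency(0.4e-3, 1e-3, 550e-9), 2))
```

For a multi-junction cell the same relation is applied per subcell, with bias light saturating the other junctions so that the measured junction limits the current.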

Relevance: 20.00%

Abstract:

Recent storms in the Nordic countries caused long power outages over large territories. After these disasters, distribution network operators faced the problem of how to provide adequate quality of supply in such situations. The decision was made to use cable lines rather than overhead lines, which brings new features to distribution networks. The main idea of this work is a comprehensive analysis of medium-voltage distribution networks with long cable lines. The high specific capacitance of cables, combined with the length of the lines, gives rise to problems such as high earth fault currents, excessive reactive power flow from the distribution to the transmission network, and a possibly high voltage level at the receiving end of cable feeders. The core task, however, was to evaluate the functional ability of the earth fault protection and the possibility of using simplified formulas for calculating protection settings in this network. To justify solutions to the problems mentioned above, the corresponding calculations were made, and a PSCAD model of the examined network was created to analyze the behavior of relay protection principles. Evaluation of the voltage rise at the end of a cable line showed no dangerous increase in voltage level, while the excessive reactive power can lead to penalties under Finnish regulations. It was shown by calculation that compensation of earth fault currents should be implemented in these networks. PSCAD models of the grid with isolated neutral, central compensation and hybrid compensation were created. For the network with hybrid compensation, a methodology for selecting the number and rated power of distributed arc suppression coils is offered. Based on the experimental results, it was determined that hybrid compensation with a high-ohmic grounding resistor should be used to guarantee selective and reliable operation of the relay protection. Directional and admittance-based relay protection schemes were tested under these conditions, and the advantages of the novel protection were revealed. For grids with extensive cabling, however, the necessity of a comprehensive approach to relay protection is explained and illustrated. Thus, to organize reliable earth fault protection, it is recommended to use both intermittent and conventional relay protection, with operating settings calculated using simplified formulas.
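Simplified formulas for such networks typically start from the capacitive earth fault current of an isolated-neutral system, I_ef = 3·ω·C0·U_phase. A hedged sketch with illustrative cable parameters (not the thesis's network data):

```python
import math

def earth_fault_current_a(u_ll_kv: float, c0_uf_per_km: float,
                          length_km: float, f_hz: float = 50.0) -> float:
    """Capacitive earth fault current (A) of an isolated-neutral cable network.

    Standard simplified estimate: I_ef = 3 * omega * C0 * U_phase, where C0
    is the total per-phase earth capacitance of the galvanically connected
    network. Cable parameters below are illustrative.
    """
    u_phase = u_ll_kv * 1e3 / math.sqrt(3)   # phase-to-earth voltage, V
    c0 = c0_uf_per_km * 1e-6 * length_km     # total earth capacitance, F
    return 3 * 2 * math.pi * f_hz * c0 * u_phase

# A 20 kV network with 50 km of cable at 0.3 uF/km per phase yields ~163 A,
# far beyond what an isolated neutral can tolerate, hence the arc
# suppression coils (central, distributed or hybrid compensation).
print(round(earth_fault_current_a(20.0, 0.3, 50.0), 1))  # 163.2
```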

Relevance: 20.00%

Abstract:

Optimization of quantum measurement processes has a pivotal role in carrying out better, i.e. more accurate or less disruptive, measurements and experiments on a quantum system. In particular, convex optimization, i.e. identifying the extreme points of the convex sets and subsets of quantum measuring devices, plays an important part in quantum optimization, since the typical figures of merit for measuring processes are affine functionals. In this thesis, we discuss results determining the extreme quantum devices and their relevance, e.g., in quantum-compatibility-related questions. In particular, we see that a compatible device pair where one device is extreme can be joined into a single apparatus in an essentially unique way. Moreover, we show that the question of whether a pair of quantum observables can be measured jointly can often be formulated in a weaker form when some of the observables involved are extreme. Another major line of research treated in this thesis deals with convex analysis of special restricted quantum device sets, covariance structures or, in particular, generalized imprimitivity systems. Some results on the structure of covariant observables and instruments are listed, as well as results identifying the extreme points of covariance structures in quantum theory. As a special case study, not published anywhere before, we study the structure of Euclidean-covariant localization observables for spin-0 particles. We also discuss the general form of Weyl-covariant phase-space instruments. Finally, certain optimality measures originating from convex geometry are introduced for quantum devices: boundariness, measuring how ‘close’ to the algebraic boundary of the device set a quantum apparatus is, and the robustness of incompatibility, quantifying the level of incompatibility for a quantum device pair by measuring the highest amount of noise the pair tolerates without becoming compatible. Boundariness is further associated with minimum-error discrimination of quantum devices, and robustness of incompatibility is shown to behave monotonically under certain compatibility-non-decreasing operations. Moreover, the value of the robustness of incompatibility is given for a few special device pairs.
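For reference, the robustness of incompatibility admits a standard formulation in the compatibility literature; the notation below is a hedged paraphrase of that standard form, not necessarily the thesis's exact definition. For a device pair (A, B):

```latex
% Robustness of incompatibility: the least amount of admissible noise
% (N_1, N_2) whose admixture renders the pair compatible.
\[
  R(A,B) \;=\; \inf\Big\{\, t \ge 0 \;:\; \exists\, (N_1, N_2) \text{ such that }
  \Big( \tfrac{A + t\,N_1}{1+t},\; \tfrac{B + t\,N_2}{1+t} \Big)
  \text{ is compatible} \Big\}.
\]
```

Equivalently, R(A, B) is the supremum of noise levels the pair "tolerates" while remaining incompatible, which matches the phrasing in the abstract above.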

Relevance: 20.00%

Abstract:

Four problems of physical interest have been solved in this thesis using the path integral formalism. Using the trigonometric expansion method of Burton and de Borde (1955), we found the kernel for two interacting one-dimensional oscillators. The result is the same as one would obtain using a normal coordinate transformation. We next introduced the method of Papadopoulos (1969), which is a systematic perturbation-type method specifically geared to finding the partition function Z, or equivalently the Helmholtz free energy F, of a system of interacting oscillators. We applied this method to the next three problems considered. First, by summing the perturbation expansion, we found F for a system of N interacting Einstein oscillators; the result is the same as the usual result obtained by Shukla and Muller (1972). Next, we found F to O(λ⁴), where λ is the usual Van Hove ordering parameter. The results obtained are the same as those of Shukla and Cowley (1971), who used a diagrammatic procedure and did the necessary sums in Fourier space; we performed the work in temperature space. Finally, slightly modifying the method of Papadopoulos, we found the finite-temperature expressions for the Debye-Waller factor in Bravais lattices, to O(λ²) and O(|K|⁴), where K is the scattering vector. The high-temperature limits of the expressions obtained here are in complete agreement with the classical results of Maradudin and Flinn (1963).
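For context, a brief reminder of the quantity involved; this is the standard harmonic (Gaussian) result, not the thesis's anharmonic expansion:

```latex
% Standard harmonic Debye-Waller factor for a Bravais lattice: the scattered
% intensity is attenuated by e^{-2W}, with the exponent
\[
  I \;\propto\; e^{-2W}, \qquad
  2W \;=\; \big\langle (\mathbf{K}\cdot\mathbf{u})^{2} \big\rangle ,
\]
% where K is the scattering vector and u the atomic displacement. The
% anharmonic perturbation theory in the thesis corrects this exponent with
% terms of higher order in the ordering parameter and in |K|.
```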