943 results for fixed point method
Abstract:
We obtain the finite-temperature unconditional master equation of the density matrix for two coupled quantum dots (CQDs) when one dot is subjected to a measurement of its electron occupation number using a point contact (PC). To determine how the CQD system state depends on the actual current through the PC device, we use the so-called quantum trajectory method to derive the zero-temperature conditional master equation. We first treat the electron tunneling through the PC barrier as a classical stochastic point process (a quantum-jump model). Then we show explicitly that our results can be extended to the quantum-diffusive limit when the average electron tunneling rate is very large compared to the extra change in the tunneling rate due to the presence of the electron in the dot closer to the PC. We find that in both the quantum-jump and quantum-diffusive cases, the conditional dynamics of the CQD system can be described by stochastic Schrödinger equations for its conditioned state vector if and only if the information carried away from the CQD system by the PC reservoirs can be recovered through perfect detection of the measurements.
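As a concrete illustration of the quantum trajectory idea, the sketch below simulates a quantum-jump unraveling for a generic two-level system: between detector clicks the state follows a non-Hermitian no-jump evolution, and each click applies the jump operator. The Hamiltonian, jump operator, and rates are illustrative assumptions, not the CQD/point-contact model of the paper.

```python
# Minimal quantum-jump (stochastic) trajectory for a generic two-level system.
# Illustrative only: the Hamiltonian, jump operator, and rates below are
# assumptions, not the CQD/point-contact model of the abstract.
import numpy as np

rng = np.random.default_rng(0)

# Two-level system: tunnel coupling Omega, measurement-induced jump rate gamma.
Omega, gamma = 1.0, 0.5            # illustrative parameters
H = Omega * np.array([[0, 1], [1, 0]], dtype=complex)   # coherent tunneling
L = np.sqrt(gamma) * np.array([[1, 0], [0, 0]], dtype=complex)  # jump operator
H_eff = H - 0.5j * (L.conj().T @ L)  # non-Hermitian effective Hamiltonian

psi = np.array([1, 0], dtype=complex)  # start in dot 1
dt, steps = 1e-3, 20000
occupation = []

for _ in range(steps):
    p_jump = dt * np.real(psi.conj() @ (L.conj().T @ L) @ psi)
    if rng.random() < p_jump:
        psi = L @ psi                        # detector records a jump event
    else:
        psi = psi - 1j * dt * (H_eff @ psi)  # smooth no-jump evolution
    psi /= np.linalg.norm(psi)               # renormalize conditioned state
    occupation.append(np.abs(psi[0]) ** 2)

print("mean occupation of dot 1:", np.mean(occupation))
```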
Abstract:
A new cloud-point extraction and preconcentration method, using a cationic surfactant, Aliquat-336 (tricaprylylmethylammonium chloride), has been developed for the determination of cyanobacterial toxins, microcystins, in natural waters. Sodium sulfate was used to induce phase separation at 25 °C. The phase behavior of Aliquat-336 with respect to the concentration of Na2SO4 was studied. The cloud-point system revealed a very high phase volume ratio compared to other established systems of nonionic, anionic, and cationic surfactants. At pH 6-7, it showed an outstanding selectivity in analyte extraction for anionic species. Only MC-LR and MC-YR, which are known to be predominantly anionic, were extracted (with average recoveries of 113.9 +/- 9% and 87.1 +/- 7%, respectively). MC-RR, which is likely to be amphoteric in the above pH range, was not detectable in the extract. Coupled to HPLC/UV separation and detection, the cloud-point extraction method (with 2.5 mM Aliquat-336 and 75 mM Na2SO4 at 25 °C) offered detection limits of 150 +/- 7 and 470 +/- 72 pg/mL for MC-LR and MC-YR, respectively, in 25 mL of deionized water. Repeatability of the method was 7.6% for MC-LR and 7.3% for MC-YR. The cloud-point extraction process can be completed within 10-15 min with no cleanup steps required. Applicability of the new method to the determination of microcystins in real samples was demonstrated using natural surface waters, collected from a local river and a local duck pond, spiked with realistic concentrations of microcystins. Effects of salinity and organic matter (TOC) content in the water sample on the extraction efficiency were also studied.
Abstract:
High-index Differential Algebraic Equations (DAEs) force standard numerical methods to reduced order. Implicit Runge-Kutta methods such as RADAU5 handle high-index problems, but their fully implicit structure creates significant overhead costs for large problems. Singly Diagonally Implicit Runge-Kutta (SDIRK) methods offer lower integration costs. This paper derives a four-stage, index-2 Explicit Singly Diagonally Implicit Runge-Kutta (ESDIRK) method. By introducing an explicit first stage, the method achieves second-order stage calculations. After deriving and solving appropriate order conditions, numerical examples are used to test the proposed method using fixed and variable step size implementations. (C) 2001 IMACS. Published by Elsevier Science B.V. All rights reserved.
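For readers unfamiliar with the ESDIRK structure, the sketch below implements one step of the well-known three-stage, second-order, L-stable TR-BDF2 tableau, which shares the defining features named above: an explicit first stage followed by implicit stages with a common diagonal coefficient. It is not the four-stage method derived in the paper; the tableau and test problem are stand-ins.

```python
# Sketch of an ESDIRK step: explicit first stage, equal diagonal thereafter.
# This is the well-known TR-BDF2 tableau (3 stages, order 2, L-stable),
# used only to illustrate the structure; it is NOT the paper's method.
import numpy as np
from scipy.optimize import fsolve

g = 1.0 - np.sqrt(2) / 2.0          # common diagonal coefficient
A = np.array([[0.0,           0.0,           0.0],
              [g,             g,             0.0],
              [np.sqrt(2)/4,  np.sqrt(2)/4,  g  ]])
b = A[-1]                            # stiffly accurate: b equals last row
c = A.sum(axis=1)

def esdirk_step(f, t, y, h):
    k = np.zeros((3, np.size(y)))
    k[0] = f(t, y)                   # stage 1 is explicit
    for i in (1, 2):                 # implicit stages share the diagonal g
        known = y + h * (A[i, :i] @ k[:i])
        k[i] = fsolve(lambda ki: ki - f(t + c[i]*h, known + h*g*ki), k[i-1])
    return y + h * (b @ k)

# Usage on a stiff scalar test problem y' = -50*(y - cos(t)):
f = lambda t, y: -50.0 * (y - np.cos(t))
t, y, h = 0.0, np.array([1.0]), 0.05
for _ in range(20):
    y = esdirk_step(f, t, y, h)
    t += h
print(t, y)
```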
Abstract:
Trials conducted in Queensland, Australia between 1997 and 2002 demonstrated that fungicides belonging to the triazole group were the most effective in minimising the severity of infection of sorghum by Claviceps africana, the causal agent of sorghum ergot. Triadimenol (as Bayfidan 250EC) at 0.125 kg a.i./ha was the most effective fungicide. A combination of the systemic activated resistance compound acibenzolar-S-methyl (as Bion 50WG) at 0.05 kg a.i./ha and mancozeb (as Penncozeb 750DF) at 1.5 kg a.i./ha has the potential to provide protection against the pathogen, should triazole-resistant isolates be detected. Timing and method of fungicide application are important. Our results suggest that the triazole fungicides have no systemic activity in sorghum panicles, necessitating multiple applications from first anthesis to the end of flowering, whereas acibenzolar-S-methyl is most effective when applied 4 days before flowering. The flat fan nozzles tested in the trials provided higher levels of protection against C. africana and greater droplet deposition on panicles than the tested hollow cone nozzles. Application of triadimenol by a fixed-wing aircraft was as efficacious as application through a tractor-mounted boom spray.
Abstract:
Most finite element packages use the Newmark algorithm for time integration of structural dynamics. Various algorithms have been proposed to better optimize the high-frequency dissipation of this algorithm. Hulbert and Chung proposed both implicit and explicit forms of the generalized alpha method. The algorithms optimize high-frequency dissipation effectively, and despite recent work on algorithms that possess momentum-conserving/energy-dissipative properties in a non-linear context, the generalized alpha method remains an efficient way to solve many problems, especially with adaptive timestep control. However, the implicit and explicit algorithms use incompatible parameter sets and cannot be used together in a spatial partition, whereas this can be done for the Newmark algorithm, as Hughes and Liu demonstrated, and for the HHT-alpha algorithm developed from it. The present paper shows that the explicit generalized alpha method can be rewritten so that it becomes compatible with the implicit form. All four algorithmic parameters can be matched between the explicit and implicit forms. An element interface between implicit and explicit partitions can then be used, analogous to that devised by Hughes and Liu to extend the Newmark method. The stability of the explicit/implicit algorithm is examined in a linear context and found to exceed that of the explicit partition. The element partition is significantly less dissipative of intermediate frequencies than one using the HHT-alpha method. The explicit algorithm can also be rewritten so that the discrete equation of motion evaluates forces from displacements and velocities found at the predicted mid-point of a cycle. Copyright (C) 2003 John Wiley & Sons, Ltd.
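As background for the discussion above, here is a minimal sketch of one implicit generalized alpha step for a linear single-degree-of-freedom system, using the standard Chung-Hulbert parametrization by the high-frequency spectral radius. It shows the baseline implicit form only, not the paper's rewritten explicit form or the explicit/implicit element interface.

```python
# Hedged sketch: one implicit generalized-alpha step (Chung & Hulbert form)
# for a linear SDOF system  m*a + c*v + k*d = F(t).  This is the textbook
# scheme, not the paper's reformulated explicit/implicit coupling.
import numpy as np

def gen_alpha_params(rho_inf):
    am = (2*rho_inf - 1) / (rho_inf + 1)
    af = rho_inf / (rho_inf + 1)
    beta = 0.25 * (1 - am + af) ** 2
    gamma = 0.5 - am + af
    return am, af, beta, gamma

def step(d, v, a, t, h, m, c, k, F, am, af, beta, gamma):
    # Predictors (terms not involving the new acceleration).
    d_pred = d + h*v + h*h*(0.5 - beta)*a
    v_pred = v + h*(1 - gamma)*a
    # Balance at the generalized mid-point t_{n+1-af}.
    lhs = (1 - am)*m + (1 - af)*gamma*h*c + (1 - af)*beta*h*h*k
    rhs = ((1 - af)*F(t + h) + af*F(t)
           - am*m*a - c*((1 - af)*v_pred + af*v)
           - k*((1 - af)*d_pred + af*d))
    a_new = rhs / lhs
    return d_pred + beta*h*h*a_new, v_pred + gamma*h*a_new, a_new

# Usage: a lightly damped oscillator under a constant load.
m, c, k = 1.0, 0.1, 100.0
F = lambda t: 1.0
am, af, beta, gamma = gen_alpha_params(rho_inf=0.8)
d, v, a, t, h = 0.0, 0.0, F(0.0)/m, 0.0, 0.01
for _ in range(1000):
    d, v, a = step(d, v, a, t, h, m, c, k, F, am, af, beta, gamma)
    t += h
print("displacement ->", d, "(static value 1/k =", 1.0/k, ")")
```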
Abstract:
Frequency deviation is a common problem for power system signal processing. Many power system measurements are carried out at a fixed sampling rate, assuming the system operates at its nominal frequency (50 or 60 Hz). However, the actual frequency may deviate from the nominal value from time to time due to various causes such as disturbances and subsequent system transients. Measurement of signals based on a fixed sampling rate may introduce errors in such situations. In order to achieve high-precision signal measurement, appropriate algorithms need to be employed to reduce the impact of frequency deviation in the power system data acquisition process. This paper proposes an advanced algorithm to enhance the Fourier transform for power system signal processing. The algorithm is able to effectively correct frequency deviation under a fixed sampling rate. Accurate measurement of power system signals is essential for the secure and reliable operation of power systems, and the algorithm is readily applicable to occasions where signal processing is affected by frequency deviation. Both mathematical proof and numerical simulation are given in this paper to illustrate the robustness and effectiveness of the proposed algorithm. Crown Copyright (C) 2003 Published by Elsevier Science B.V. All rights reserved.
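The abstract does not spell out the proposed algorithm, so the sketch below illustrates the underlying problem and one common correction instead: a one-cycle DFT phasor computed at a fixed sampling rate rotates from cycle to cycle when the true frequency deviates from nominal, and that rotation yields a first-order frequency estimate. This is a generic textbook technique, not the paper's method.

```python
# Hedged sketch (not the paper's algorithm): estimating the true power-system
# frequency under a fixed sampling rate from the phase rotation of a sliding
# one-cycle DFT phasor.
import numpy as np

f0, fs = 50.0, 1600.0              # nominal frequency, fixed sampling rate
N = int(fs / f0)                   # samples per nominal cycle (32)
f_true, A, phi = 49.6, 1.0, 0.3    # actual (deviated) signal

n = np.arange(2 * N)
x = A * np.cos(2*np.pi*f_true*n/fs + phi)

def fundamental_phasor(window):
    # One-cycle DFT evaluated at the nominal fundamental bin.
    k = np.arange(N)
    return (2.0/N) * np.sum(window * np.exp(-2j*np.pi*k/N))

p1 = fundamental_phasor(x[0:N])
p2 = fundamental_phasor(x[N:2*N])

# For f = f0 the phasor repeats cycle to cycle; any residual rotation
# measures the deviation to first order:  dphi = 2*pi*(f - f0)/f0.
dphi = np.angle(p2 / p1)
f_est = f0 * (1.0 + dphi / (2.0*np.pi))
print(f"estimated frequency: {f_est:.3f} Hz (true {f_true} Hz)")
```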
Abstract:
In the Sparse Point Representation (SPR) method, the principle is to retain the function data indicated by significant interpolatory wavelet coefficients, which are defined as interpolation errors by means of an interpolating subdivision scheme. Typically, an SPR grid is coarse in smooth regions and refined close to irregularities. Furthermore, the computation of partial derivatives of a function from the information of its SPR content is performed in two steps. The first is a refinement procedure that extends the SPR by including new interpolated point values in a security zone. Then, for points in the refined grid, such derivatives are approximated by uniform finite differences, using a step size proportional to each point's local scale. If the required neighboring stencils are not present in the grid, the corresponding missing point values are approximated from coarser scales using the interpolating subdivision scheme. Using the cubic interpolating subdivision scheme, we demonstrate that such adaptive finite differences can be formulated in terms of a collocation scheme based on the wavelet expansion associated with the SPR. For this purpose, we prove some results concerning the local behavior of such wavelet reconstruction operators, which hold for SPR grids having appropriate structure. This implies that the adaptive finite difference scheme and the one using the step size of the finest level produce the same result at SPR grid points. Consequently, in addition to the refinement strategy, our analysis indicates that some care must be taken concerning the grid structure in order to keep the truncation error under a certain accuracy limit. Illustrative results are presented for numerical solutions of the 2D Maxwell equations.
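A minimal 1D sketch of the SPR selection step described above: detail (wavelet) coefficients are computed as the errors of the 4-point cubic Deslauriers-Dubuc interpolating subdivision, and only points with significant coefficients are kept. The test function and threshold are illustrative assumptions.

```python
# Hedged 1D sketch of the SPR selection step: wavelet coefficients are the
# errors of the 4-point (cubic Deslauriers-Dubuc) interpolating subdivision;
# points with small coefficients are dropped from the sparse representation.
import numpy as np

def dd4_predict(even):
    # Predict odd samples from 4 even neighbors: (-1, 9, 9, -1)/16 stencil.
    return (-even[:-3] + 9*even[1:-2] + 9*even[2:-1] - even[3:]) / 16.0

N = 257                                  # 2^8 + 1 points on [0, 1]
x = np.linspace(0.0, 1.0, N)
f = np.tanh(50.0 * (x - 0.5))            # smooth except near x = 0.5

even, odd = f[0::2], f[1::2]
detail = odd[1:-1] - dd4_predict(even)   # interpolation errors (coefficients)

eps = 1e-4
kept = np.abs(detail) > eps              # significant odd-level points
print(f"kept {kept.sum()} of {detail.size} odd points; "
      f"all within |x-0.5| <= {np.abs(x[3:-2:2][kept] - 0.5).max():.3f}")
```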
Abstract:
This paper presents a contrastive approach to three different ways of building concepts, after showing the similar syntactic possibilities that coexist in terms. From the semantic point of view, however, each language family distributes meaning differently. The most important point we try to show is that the differences found in the psychological process of communicating concepts should guide the translator and the terminologist in target text production and in the terminology planning process. Differences between languages in the information transmission process are due to the different roles played by different types of knowledge; we distinguish here analytic-descriptive knowledge and analogical knowledge, among others. We also argue that none of them is inherently best for determining the correctness of a term; rather, there have to be adequacy criteria in the selection process. Success in concept building or term building is important when looking at the linguistic map of the information society.
Abstract:
While the earliest deadline first (EDF) algorithm is known to be optimal as a uniprocessor scheduling policy, its implementation comes at a cost in complexity. Fixed task-priority algorithms, on the other hand, have lower complexity but a higher likelihood of task sets being declared unschedulable when compared to EDF. Various attempts have been made to increase the chances of proving a task set schedulable with similarly low complexity. In some cases, this was achieved by modifying applications to limit preemptions, at the cost of flexibility. In this work, we explore several variants of a concept to limit interference by locking down the ready queue at certain instants. The aim is to increase the prospects of schedulability of a given task system, without compromising on complexity or flexibility, when compared to the regular fixed task-priority algorithm. As a final contribution, a new preemption threshold assignment algorithm is provided which is less complex and more straightforward than the previous method available in the literature.
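For reference, the baseline such variants are measured against is the classical response-time analysis for fixed task-priority preemptive scheduling, sketched below; the recurrence is iterated to a fixed point. The task set in the usage line is an illustrative example, not one from the paper.

```python
# Hedged sketch: classical response-time analysis for fixed task-priority
# preemptive scheduling (the baseline the paper's variants aim to improve).
# Tasks are (C, T) with implicit deadlines D = T, highest priority first.
# The recurrence is  R_i = C_i + sum_{j < i} ceil(R_i / T_j) * C_j.
import math

def response_times(tasks):
    """tasks: list of (C, T), highest priority first. Returns R_i or None."""
    results = []
    for i, (C_i, T_i) in enumerate(tasks):
        R = C_i
        while True:
            R_next = C_i + sum(math.ceil(R / T_j) * C_j
                               for C_j, T_j in tasks[:i])
            if R_next > T_i:          # deadline miss: unschedulable
                results.append(None)
                break
            if R_next == R:           # fixed point reached
                results.append(R)
                break
            R = R_next
    return results

# Usage: a small task set under rate-monotonic priorities.
print(response_times([(1, 4), (2, 6), (3, 12)]))   # -> [1, 3, 10]
```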
Abstract:
3D laser scanning is becoming a standard technology for generating building models of a facility's as-is condition. Since most structures are built from planar surfaces, recognizing those surfaces paves the way toward automated generation of building models. This paper introduces a new logarithmically proportional objective function that can be used in both heuristic and metaheuristic (MH) algorithms to discover planar surfaces in a point cloud without exploiting any prior knowledge about those surfaces. It can also adapt itself to the structural density of a scanned construction. In this paper, a metaheuristic method, the genetic algorithm (GA), is used to test the introduced objective function on a synthetic point cloud. The results obtained show that the proposed method is capable of finding all plane configurations of planar surfaces (with a wide variety of sizes) in the point cloud, with only a minor distance to the actual configurations. © 2014 IEEE.
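The abstract describes the objective function only qualitatively, so the sketch below assumes a plausible log-attenuated point-to-plane distance score purely to illustrate how a plane fitness is evaluated against a point cloud inside a GA or other metaheuristic; the functional form, parameters, and synthetic cloud are assumptions.

```python
# Hedged sketch: evaluating candidate planes against a point cloud.  The
# paper's "logarithmically proportional" objective is not spelled out in
# the abstract; the log form below is an assumption for illustration only.
import numpy as np

rng = np.random.default_rng(1)

def plane_fitness(points, normal, d, scale=0.05):
    """Higher is better.  normal: unit vector; plane is n.x + d = 0."""
    dist = np.abs(points @ normal + d)
    return np.sum(np.log1p(scale / (dist + scale)))   # assumed log form

# Synthetic cloud: noisy plane z = 0.2 plus uniform clutter.
plane_pts = np.column_stack([rng.uniform(-1, 1, (500, 2)),
                             0.2 + 0.005*rng.standard_normal(500)])
clutter = rng.uniform(-1, 1, (500, 3))
cloud = np.vstack([plane_pts, clutter])

true = plane_fitness(cloud, np.array([0.0, 0.0, 1.0]), -0.2)
wrong = plane_fitness(cloud, np.array([0.0, 0.0, 1.0]), -0.7)
print(f"fitness at true plane: {true:.1f}, at wrong offset: {wrong:.1f}")
```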
Abstract:
The purpose of this paper is to discuss the linear solution of equality-constrained problems using the frontal solution method without explicit assembly. Design/methodology/approach - Rewritten frontal solution method with a priori pivot and front sequences. OpenMP parallelization, nearly linear (in elimination and substitution) up to 40 threads. Constraints enforced at the local assembling stage. Findings - When compared with both standard sparse solvers and classical frontal implementations, memory requirements and code size are significantly reduced. Research limitations/implications - Large, non-linear problems with constraints typically make use of the Newton method with Lagrange multipliers. In the context of the solution of problems with a large number of constraints, matrix transformation methods (MTM) are often more cost-effective. The paper presents a complete solution, with topological ordering, for this problem. Practical implications - A complete software package in Fortran 2003 is described. Examples of clique-based problems are shown with large systems solved in core. Social implications - More realistic non-linear problems can be solved with this frontal code at the core of the Newton method. Originality/value - Use of topological ordering of constraints. A priori pivot and front sequences. No need for symbolic assembling. Constraints treated at the core of the frontal solver. Use of OpenMP in the main frontal loop, now quantified. Availability of software.
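To fix notation for the constrained problems discussed above, here is a tiny dense sketch of the Lagrange-multiplier (KKT) form of an equality-constrained linear solve. The paper's contribution is handling such constraints inside an unassembled frontal solver; this sketch shows only the underlying algebra, on a made-up spring-chain example.

```python
# Hedged sketch of the underlying problem only: a dense Lagrange-multiplier
# (KKT) solve of  K x = f  subject to  A x = b.  The paper treats these
# constraints inside a frontal solver without explicit assembly; this tiny
# dense version just fixes the notation.
import numpy as np

def constrained_solve(K, f, A, b):
    n, m = K.shape[0], A.shape[0]
    kkt = np.block([[K, A.T],
                    [A, np.zeros((m, m))]])
    rhs = np.concatenate([f, b])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n], sol[n:]          # primal unknowns, multipliers

# Usage: a 3-dof spring chain with the end displacements tied together.
K = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
f = np.array([1., 0., 0.])
A = np.array([[1., 0., -1.]])        # constraint x_0 = x_2
b = np.array([0.])
x, lam = constrained_solve(K, f, A, b)
print("x =", x, " A x - b =", A @ x - b)
```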
Abstract:
Hyperspectral imaging has become one of the main topics in remote sensing. Hyperspectral images comprise hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, generating large data volumes of several GB per flight. This high spectral resolution can be used for object detection and to discriminate between different objects based on their spectral characteristics. One of the main problems in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not able to separate spectrally distinct materials. Spectral unmixing is thus one of the most important tasks for hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive and power-hungry, which compromises their use in applications under on-board constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point performance, compact size, huge memory bandwidth, and relatively low cost of these units, which make them appealing for on-board data processing. In this paper, we propose a parallel implementation of an augmented-Lagrangian-based method for unsupervised hyperspectral linear unmixing on GPUs using CUDA. The method, called simplex identification via split augmented Lagrangian (SISAL), aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The implementation of the SISAL method presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses.
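SISAL itself (and its CUDA kernel structure) is too involved to reproduce here, so the sketch below only sets up the linear mixing model the method targets and recovers per-pixel abundances with non-negative least squares as a stand-in for the abundance step; the endmembers and noise level are synthetic assumptions.

```python
# Hedged sketch: the linear mixing model behind spectral unmixing.  SISAL
# (and its CUDA implementation) is much more involved; here we just mix two
# known endmembers and recover per-pixel abundances with non-negative least
# squares as a stand-in for the abundance step.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)

bands, pixels = 50, 100
E = np.abs(rng.standard_normal((bands, 2)))     # two synthetic endmembers
frac = rng.uniform(0, 1, pixels)
A_true = np.vstack([frac, 1 - frac])            # abundances sum to one
Y = E @ A_true + 0.01 * rng.standard_normal((bands, pixels))  # mixed pixels

A_est = np.column_stack([nnls(E, Y[:, j])[0] for j in range(pixels)])
print("max abundance error:", np.abs(A_est - A_true).max())
```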
Abstract:
Master's degree in Electrical and Computer Engineering - Autonomous Systems branch
Abstract:
27th Euromicro Conference on Real-Time Systems (ECRTS 2015), Lund, Sweden.
Abstract:
Parvovirus B19 was first discovered in 1975 and is implicated in fetal death from hydrops fetalis worldwide. Diagnosis is usually made through histological identification of intranuclear inclusions in the placenta and fetal organs. However, these cells may be scarce or uncharacteristic, making definitive diagnosis difficult. We histologically analyzed placentas and fetal organs from 34 cases of non-immune hydrops fetalis, stained with hematoxylin and eosin (HE) and submitted to immunohistochemistry and polymerase chain reaction (PCR). Of the 34 tissue samples, two (5.9%) presented typical intranuclear inclusions in circulating normoblasts in HE-stained sections, confirmed by immunohistochemistry and PCR. However, PCR of fetal organs was negative in one case in which the placenta PCR was positive. We concluded that the frequency of parvovirus B19 infection is similar to that reported in the literature and that immunohistochemistry was the best detection method. It is highly specific and sensitive, preserves the morphology, and reveals a larger number of positive cells than HE does, with the advantage of showing cytoplasmic and nuclear positivity, making it more reliable. Although PCR is more specific and sensitive in fresh or ideally fixed material, it is not so in formalin-fixed paraffin-embedded tissues, frequently the only material available in such cases.