971 results for Fast Computation Algorithm


Relevance:

20.00%

Publisher:

Abstract:

The main goal of this paper is to apply the policy iteration algorithm (PIA) to the long-run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space, with a compact action space depending on the state variable. To do so, we first derive some important properties of a pseudo-Poisson equation associated with the problem. We then show that, under some classical hypotheses, the PIA converges to a solution satisfying the optimality equation, and that this optimal solution yields an optimal feedback control strategy for the average control problem for the continuous-time PDMP.
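The PIA above operates in continuous time on general Borel spaces; as a rough intuition aid only, here is a minimal sketch of policy iteration for a finite, discrete-time, discounted MDP (the toy model and parameters are hypothetical and omit all of the paper's PDMP machinery):

```python
import numpy as np

def policy_iteration(P, c, gamma=0.9, max_iter=100):
    """Policy iteration for a finite, discounted MDP (toy analogue).

    P[a] : (S x S) transition matrix under action a
    c[a] : length-S one-stage cost vector of action a
    """
    n_actions, n_states = len(P), P[0].shape[0]
    policy = np.zeros(n_states, dtype=int)
    v = np.zeros(n_states)
    for _ in range(max_iter):
        # Policy evaluation: solve (I - gamma * P_pi) v = c_pi
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        c_pi = np.array([c[policy[s]][s] for s in range(n_states)])
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, c_pi)
        # Policy improvement: greedy one-step lookahead on costs
        q = np.array([c[a] + gamma * P[a] @ v for a in range(n_actions)])
        new_policy = q.argmin(axis=0)
        if np.array_equal(new_policy, policy):
            break                      # policy is stable, hence optimal
        policy = new_policy
    return policy, v
```

The alternation of exact evaluation and greedy improvement is the same structure the paper transfers to the PDMP setting via the pseudo-Poisson equation.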

Relevance:

20.00%

Publisher:

Abstract:

An algorithm inspired by ant behavior is developed to find the topology of an electric energy distribution network with minimum power loss. The algorithm's performance is investigated on hypothetical and actual circuits. When applied to an actual distribution system in a region of the State of Sao Paulo (Brazil), the solution found by the algorithm presents lower loss than the topology built by the concessionaire.
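The abstract does not detail the authors' algorithm, so the following is only a generic ant-colony sketch on a made-up directed graph, with edge weight standing in for power loss: ants choose edges with probability proportional to pheromone divided by cost, and pheromone evaporates and is reinforced on low-cost paths.

```python
import random

def ant_colony_shortest_path(graph, source, target, n_ants=20, n_iters=30,
                             evaporation=0.5, seed=1):
    """Toy ant-colony search for a low-cost source-target path.

    graph: dict mapping directed edges (u, v) to positive costs.
    """
    random.seed(seed)
    pheromone = {edge: 1.0 for edge in graph}
    best_path, best_cost = None, float("inf")
    for _ in range(n_iters):
        paths = []
        for _ in range(n_ants):
            node, path, visited = source, [source], {source}
            while node != target:
                options = [(nbr, w) for (u, nbr), w in graph.items()
                           if u == node and nbr not in visited]
                if not options:
                    path = None            # dead end: discard this ant
                    break
                weights = [pheromone[(node, nbr)] / w for nbr, w in options]
                node = random.choices([nbr for nbr, _ in options], weights)[0]
                path.append(node)
                visited.add(node)
            if path:
                cost = sum(graph[(path[i], path[i + 1])]
                           for i in range(len(path) - 1))
                paths.append((path, cost))
                if cost < best_cost:
                    best_path, best_cost = path, cost
        for edge in pheromone:             # evaporation
            pheromone[edge] *= (1 - evaporation)
        for path, cost in paths:           # deposit, stronger on cheap paths
            for i in range(len(path) - 1):
                pheromone[(path[i], path[i + 1])] += 1.0 / cost
    return best_path, best_cost
```

On a network-reconfiguration problem, "cost" would be the power loss of a candidate topology rather than a path length.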

Relevance:

20.00%

Publisher:

Abstract:

The most popular algorithms for blind equalization are the constant-modulus algorithm (CMA) and the Shalvi-Weinstein algorithm (SWA). It is well known that SWA presents a higher convergence rate than CMA, at the expense of higher computational complexity. If the forgetting factor is not sufficiently close to one, if the initialization is distant from the optimal solution, or if the signal-to-noise ratio is low, SWA can converge to undesirable local minima or even diverge. In this paper, we show that divergence can be caused by an inconsistency in the nonlinear estimate of the transmitted signal, by the loss of positiveness of the estimate of the autocorrelation matrix (when the algorithm is implemented in finite precision), or by a combination of both. To avoid the first cause of divergence, we propose a dual-mode SWA. In the first mode of operation, the new algorithm works as SWA; in the second mode, it rejects inconsistent estimates of the transmitted signal. Assuming the persistence-of-excitation condition, we present a deterministic stability analysis of the new algorithm. To avoid the second cause of divergence, we propose a dual-mode lattice SWA, which is stable even in finite-precision arithmetic and has a computational complexity that increases linearly with the number of adjustable equalizer coefficients. The good performance of the proposed algorithms is confirmed through numerical simulations.
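The dual-mode SWA variants are beyond a short sketch, but the baseline CMA the paper compares against has a compact stochastic-gradient update; a minimal sketch, with illustrative tap count, step size, and dispersion constant:

```python
import numpy as np

def cma_equalize(x, n_taps=5, mu=1e-3, r2=1.0):
    """Constant-modulus algorithm (CMA) for blind equalization (sketch).

    r2 is the dispersion constant E|s|^4 / E|s|^2 of the transmitted
    constellation (1.0 for unit-modulus signals); tap count and step
    size here are illustrative, not tuned values.
    """
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                       # center-spike initialization
    y = np.zeros(len(x), dtype=complex)
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]              # regressor, most recent first
        y[n] = w @ u                           # equalizer output
        e = y[n] * (np.abs(y[n]) ** 2 - r2)    # constant-modulus error term
        w = w - mu * e * np.conj(u)            # stochastic-gradient update
    return y, w
```

SWA replaces this gradient step with a recursive least-squares-like update driven by a forgetting factor, which is where the convergence-rate gain and the divergence risks discussed above come from.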

Relevance:

20.00%

Publisher:

Abstract:

This paper presents the design and implementation of an embedded soft sensor, i.e., a generic and autonomous hardware module that can be applied to many complex plants in which a certain variable cannot be directly measured. It is implemented using a fuzzy identification algorithm called "Limited Rules", employed to model continuous nonlinear processes. The fuzzy model has a Takagi-Sugeno-Kang structure, and the premise parameters are defined by the Fuzzy C-Means (FCM) clustering algorithm. The firmware contains the soft sensor and runs online, estimating the target variable from other available variables. Tests have been performed using a simulated pH neutralization plant, and the results of the embedded soft sensor have been considered satisfactory. A complete embedded inferential control system is also presented, comprising the soft sensor and a PID controller. (c) 2007, ISA. Published by Elsevier Ltd. All rights reserved.
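The FCM step used above to set the premise parameters is the standard Fuzzy C-Means algorithm; a minimal sketch (the fuzzifier m and the data are illustrative, and this omits the "Limited Rules" identification itself):

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iters=100, tol=1e-5, seed=0):
    """Standard Fuzzy C-Means: returns centers and membership matrix U."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)           # memberships sum to 1 per point
    centers = None
    for _ in range(n_iters):
        Um = U ** m                             # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                   # guard against zero distance
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```

In a TSK identification workflow, the resulting centers and memberships would seed the premise membership functions, with consequent parameters fitted separately (e.g., by least squares).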

Relevance:

20.00%

Publisher:

Abstract:

This paper addresses the single machine scheduling problem with a common due date, aiming to minimize earliness and tardiness penalties. Due to its complexity, most of the previous studies in the literature deal with this problem using heuristic and metaheuristic approaches. To contribute to the study of this problem, a branch-and-bound algorithm is proposed, with lower bounds and pruning rules that exploit properties of the problem. The proposed approach is examined through a comparative computational study on 280 problems involving different due date scenarios. In addition, optimal solution values for small problems from a known benchmark are provided.
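The paper's bounds and pruning rules are not reproduced in the abstract; the sketch below is only a toy branch-and-bound over job sequences with the trivial prefix-cost bound, assuming the schedule starts at time 0 with no inserted idle time:

```python
def branch_and_bound(p, d, alpha, beta):
    """Toy B&B for common-due-date earliness/tardiness scheduling.

    p     : processing times;  d : common due date
    alpha : per-job earliness penalties;  beta : tardiness penalties
    Prunes only on the cost of the fixed prefix (a valid lower bound,
    since scheduled completion times no longer change); the paper's
    problem-specific bounds are much stronger.
    """
    n = len(p)
    best = [float("inf"), None]                 # [best cost, best sequence]

    def dfs(prefix, remaining, t, cost):
        if cost >= best[0]:                     # bound: prune this subtree
            return
        if not remaining:
            best[0], best[1] = cost, prefix
            return
        for j in sorted(remaining):             # deterministic branching
            t2 = t + p[j]
            c2 = cost + alpha[j] * max(0, d - t2) + beta[j] * max(0, t2 - d)
            dfs(prefix + [j], remaining - {j}, t2, c2)

    dfs([], frozenset(range(n)), 0, 0)
    return best[1], best[0]
```

Even this naive bound prunes many of the n! sequences once a good incumbent is found, which is the effect the paper's dedicated bounds amplify.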

Relevance:

20.00%

Publisher:

Abstract:

Eucalyptus is the dominant and most productive planted forest in Brazil, covering around 3.4 million ha for the production of charcoal, pulp, sawtimber, timber plates, wood foils, plywood and for building purposes. At the early establishment of the forest plantations, during the second half of the 1960s, the eucalypt yield was 10 m³ ha⁻¹ y⁻¹. Now, as a result of investments in research and technology, the average productivity is 38 m³ ha⁻¹ y⁻¹. The productivity restrictions are related to the following environmental factors, in order of importance: water deficits > nutrient deficiency > soil depth and strength. Clonal forests have been fundamental in sites with greater water and nutrient restrictions, where they outperform stands established from traditional seed-based planting stock. When the environmental limitations are small, the productivities of plantations based on clones or seeds appear to be similar. In the long term there are risks to sustainability, because of the low fertility and low reserves of primary minerals in the soils, which are commonly loamy and clayey oxisols and ultisols. Usually, a decline in soil quality is caused by management that does not conserve soil and site resources or that damages soil physical and chemical characteristics, and by insufficient or unbalanced fertiliser management. The problem is more serious when fast-growing genotypes are planted, which have a high nutrient demand and uptake capacity, and therefore a high nutrient output through harvesting. The need to mobilise less soil by providing more cover and protection, reduce nutrient and organic matter losses, preserve crucial physical properties such as permeability (root growth, infiltration and aeration), improve weed control and reduce costs has led to a progressive increase in the use of minimum cultivation practices during the last 20 years, which are accepted as a good alternative to maintain or increase site quality in the long term.
In this paper we provide a synthesis and critical appraisal of the research results and practical implications of early silvicultural management on long-term site productivity of fast-growing eucalypt plantations arising from the Brazilian context.

Relevance:

20.00%

Publisher:

Abstract:

By allowing the estimation of forest structural and biophysical characteristics at different temporal and spatial scales, remote sensing may contribute to our understanding and monitoring of planted forests. Here, we studied 9-year time-series of the Normalized Difference Vegetation Index (NDVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS) on a network of 16 stands in fast-growing Eucalyptus plantations in Sao Paulo State, Brazil. We aimed to examine the relationships between NDVI time-series spanning entire rotations and stand structural characteristics (volume, dominant height, mean annual increment) in these simple forest ecosystems. Our second objective was to examine spatial and temporal variations in light use efficiency for wood production, by comparing time-series of Absorbed Photosynthetically Active Radiation (APAR) with inventory data. Relationships were calibrated between the NDVI and the fractions of intercepted diffuse and direct radiation, using hemispherical photographs taken in the studied stands in two seasons. APAR was calculated from the NDVI time-series using these relationships. Stem volume and dominant height were strongly correlated with summed NDVI values between planting date and inventory date. Stand productivity was correlated with mean NDVI values. APAR during the first 2 years of growth varied between stands and was well correlated with stem wood production (r² = 0.78). In contrast, APAR during the following years was less variable and not significantly correlated with stem biomass increments. Production of wood per unit of absorbed light varied with stand age and with site index. In our study, a better site index was accompanied both by increased APAR during the first 2 years of growth and by higher light use efficiency for stem wood production over the whole rotation. Implications for simple process-based modelling are discussed. (C) 2009 Elsevier B.V. All rights reserved.
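For reference, the NDVI used throughout the study is the standard normalized band ratio; a minimal sketch (the reflectance values below are made up for illustration):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

# A summed NDVI over the rotation is the quantity the study correlates
# with stem volume and dominant height (dates and values are hypothetical).
series = ndvi(nir=[0.5, 0.6, 0.7], red=[0.10, 0.08, 0.06])
summed = series.sum()
```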

Relevance:

20.00%

Publisher:

Abstract:

A method was optimized for the analysis of omeprazole (OMZ) by ultra-high speed LC with diode array detection using a monolithic Chromolith Fast Gradient RP 18 endcapped column (50 × 2.0 mm i.d.). The analyses were performed at 30 °C using a mobile phase consisting of 0.15% (v/v) trifluoroacetic acid (TFA) in water (solvent A) and 0.15% (v/v) TFA in acetonitrile (solvent B) under a linear gradient of 5 to 90% B in 1 min at a flow rate of 1.0 mL/min and detection at 220 nm. Under these conditions, the OMZ retention time was approximately 0.74 min. Validation parameters, such as selectivity, linearity, precision, accuracy, and robustness, showed results within the acceptance criteria. The method developed was successfully applied to OMZ enteric-coated pellets, showing that this assay can be used in the pharmaceutical industry for routine QC analysis. Moreover, the analytical conditions established allow for the simultaneous analysis of the OMZ metabolites 5-hydroxyomeprazole and omeprazole sulfone in the same run, showing that this method can be extended to other matrixes with adequate procedures for sample preparation.

Relevance:

20.00%

Publisher:

Abstract:

Despite the necessity to differentiate chemical species of mercury in clinical specimens, there are a limited number of methods for this purpose. This paper therefore describes a simple method for the determination of methylmercury and inorganic mercury in blood using liquid chromatography coupled to inductively coupled plasma mass spectrometry (LC-ICP-MS) and a fast sample preparation procedure. Prior to analysis, blood (250 μL) is accurately weighed into 15-mL conical tubes. Then an extractant solution containing mercaptoethanol, L-cysteine and HCl is added to the samples, followed by sonication for 15 min. Quantitative mercury extraction was achieved with the proposed procedure. Separation of mercury species was accomplished in less than 5 min on a C18 reversed-phase column with a mobile phase containing 0.05% (v/v) mercaptoethanol, 0.4% (m/v) L-cysteine, 0.06 mol L⁻¹ ammonium acetate and 5% (v/v) methanol. The method detection limits were found to be 0.25 μg L⁻¹ and 0.1 μg L⁻¹ for inorganic mercury and methylmercury, respectively. Method accuracy is traceable to Standard Reference Material (SRM) 966 Toxic Metals in Bovine Blood from the National Institute of Standards and Technology (NIST). The proposed method was also applied to the speciation of mercury in blood samples collected from fish-eating communities and from rats exposed to thimerosal. With the proposed method there is a considerable reduction in sample preparation time prior to speciation of Hg by LC-ICP-MS. Finally, applying the proposed method, we demonstrated an interesting in vivo ethylmercury conversion to inorganic mercury. (C) 2009 Elsevier B.V. All rights reserved.
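The abstract reports method detection limits without stating the criterion used; one common convention, shown here purely as an illustration (not necessarily the authors' choice), is 3× the standard deviation of blank measurements divided by the calibration slope:

```python
import numpy as np

def detection_limit(blank_signals, slope):
    """Method detection limit as 3 * SD(blanks) / calibration slope.

    A common convention only; the paper does not state which criterion
    was used, so treat this as an illustration of the arithmetic.
    """
    return 3 * np.std(blank_signals, ddof=1) / slope
```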

Relevance:

20.00%

Publisher:

Abstract:

A simple method for mercury speciation in hair samples, with a fast sample preparation procedure, using high-performance liquid chromatography coupled to inductively coupled plasma mass spectrometry is proposed. Prior to analysis, 50 mg of hair sample was accurately weighed into 15 mL conical tubes. Then an extractant solution containing mercaptoethanol, L-cysteine and HCl was added to the samples, followed by sonication for 10 min. Quantitative mercury extraction was achieved with the proposed procedure. Separation of inorganic mercury (Ino-Hg), methylmercury (Met-Hg) and ethylmercury (Et-Hg) was accomplished in less than 8 min on a C18 reversed-phase column with a mobile phase containing 0.05% v/v mercaptoethanol, 0.4% m/v L-cysteine, 0.06 mol L⁻¹ ammonium acetate and 5% v/v methanol. The method detection limits were found to be 15 ng g⁻¹, 10 ng g⁻¹ and 38 ng g⁻¹ for inorganic mercury, methylmercury and ethylmercury, respectively. Sample throughput is 4 samples h⁻¹ (in duplicate). A considerable improvement in analysis time was achieved compared to other published methods. Method accuracy is traceable to Certified Reference Materials (CRMs) 85 and 86 (human hair) from the International Atomic Energy Agency (IAEA). Finally, the proposed method was successfully applied to the speciation of mercury in hair samples collected from fish-eating communities of the Brazilian Amazon.

Relevance:

20.00%

Publisher:

Abstract:

A simple and fast method is described for the simultaneous determination of methylmercury (MeHg), ethylmercury (Et-Hg) and inorganic mercury (Ino-Hg) in blood samples by capillary gas chromatography-inductively coupled plasma mass spectrometry (GC-ICP-MS) after derivatization and alkaline digestion. Closed-vessel microwave-assisted digestion conditions with tetramethylammonium hydroxide (TMAH) were optimized. Derivatization by ethylation and by propylation was also evaluated and compared. The absolute detection limits (using a 1 μL injection) obtained by GC-ICP-MS were 40 fg for both MeHg and Ino-Hg with ethylation, and 50, 20 and 50 fg for MeHg, Et-Hg and Ino-Hg, respectively, with propylation. Method accuracy is traceable to Standard Reference Material (SRM) 966 Toxic Metals in Bovine Blood from the National Institute of Standards and Technology (NIST). Additional validation is provided by comparing the results obtained for mercury speciation in blood samples with the proposed procedure and with a previously reported LC-ICP-MS method. With the newly proposed procedure no tedious clean-up steps are required, and a considerable improvement in analysis time was achieved compared to other methods using GC separation.

Relevance:

20.00%

Publisher:

Abstract:

The one-way quantum computing model introduced by Raussendorf and Briegel [Phys. Rev. Lett. 86, 5188 (2001)] shows that it is possible to quantum compute using only a fixed entangled resource known as a cluster state, together with adaptive single-qubit measurements. This model is the basis for several practical proposals for quantum computation, including a promising proposal for optical quantum computation based on cluster states [M. A. Nielsen, Phys. Rev. Lett. (to be published), quant-ph/0402005]. A significant open question is whether such proposals are scalable in the presence of physically realistic noise. In this paper we prove two threshold theorems showing that scalable fault-tolerant quantum computation can be achieved in implementations based on cluster states, provided the noise in the implementations is below some constant threshold value. Our first threshold theorem applies to a class of implementations in which entangling gates are applied deterministically, but with a small amount of noise. We expect this threshold to be applicable in a wide variety of physical systems. Our second threshold theorem is specifically adapted to proposals such as the optical cluster-state proposal, in which nondeterministic entangling gates are used. A critical technical component of our proofs is a pair of theorems relating the properties of noisy unitary operations restricted to act on a subspace of state space to extensions of those operations acting on the entire state space. We expect these theorems to have a variety of applications in other areas of quantum-information science.

Relevance:

20.00%

Publisher:

Abstract:

Quantum computers promise to increase greatly the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
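The linear-optics elements mentioned above are easy to model in the single-photon subspace of two modes; a minimal sketch of a 50:50 beam splitter (phase conventions vary; this is one common real-valued choice, not the paper's notation):

```python
import numpy as np

# 50:50 beam splitter acting on the one-photon subspace of two optical
# modes. Basis states: photon in mode 0, photon in mode 1.
B = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)

psi_in = np.array([1.0, 0.0])        # a single photon enters mode 0
psi_out = B @ psi_in
probs = np.abs(psi_out) ** 2         # detection probabilities at each output
```

A single photon therefore exits either port with probability 1/2, the basic interference fact the dual-rail qubit encoding builds on.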

Relevance:

20.00%

Publisher:

Abstract:

A novel strategy for fast NMR resonance assignment of N-15 HSQC spectra of proteins is presented. It requires the structure coordinates of the protein, a paramagnetic center, and one or more residue-selectively N-15-labeled samples. Comparison of sensitive undecoupled N-15 HSQC spectra recorded of paramagnetic and diamagnetic samples yields data for every cross-peak on pseudocontact shift, paramagnetic relaxation enhancement, cross-correlation between Curie-spin and dipole-dipole relaxation, and residual dipolar coupling. Comparison of these four different paramagnetic quantities with predictions from the three-dimensional structure simultaneously yields the resonance assignment and the anisotropy of the susceptibility tensor of the paramagnetic center. The method is demonstrated with the 30 kDa complex between the N-terminal domain of the epsilon subunit and the theta subunit of Escherichia coli DNA polymerase III. The program PLATYPUS was developed to perform the assignment, provide a measure of reliability of the assignment, and determine the susceptibility tensor anisotropy.
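Of the four paramagnetic quantities, the pseudocontact shift has the simplest closed form; a sketch of the standard expression (the coordinates and anisotropy values below are made-up illustrations, not numbers from the paper):

```python
import numpy as np

def pseudocontact_shift(r, theta, phi, dchi_ax, dchi_rh):
    """Pseudocontact shift (ppm) of a nucleus at spherical coordinates
    (r [m], theta, phi) in the susceptibility-tensor frame, given axial
    and rhombic anisotropies dchi_ax and dchi_rh [m^3]."""
    geom = (dchi_ax * (3 * np.cos(theta) ** 2 - 1)
            + 1.5 * dchi_rh * np.sin(theta) ** 2 * np.cos(2 * phi))
    return 1e6 * geom / (12 * np.pi * r ** 3)

# Made-up example: nucleus 10 Angstrom along the tensor z-axis, axial
# anisotropy 1e-32 m^3, no rhombic component.
shift = pseudocontact_shift(1e-9, 0.0, 0.0, 1e-32, 0.0)
```

Fitting predicted shifts of this form against observed paramagnetic-diamagnetic peak displacements is what lets the method solve simultaneously for the assignment and the tensor anisotropy.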

Relevance:

20.00%

Publisher:

Abstract:

The BR algorithm is a novel and efficient method for finding all eigenvalues of upper Hessenberg matrices, and it has never before been applied to eigenanalysis for power system small signal stability. This paper analyzes the differences between the BR and QR algorithms, comparing their performance in terms of CPU time (based on stopping criteria) and storage requirements. The BR algorithm uses accelerating strategies to improve its performance when computing eigenvalues of narrowly banded, nearly tridiagonal upper Hessenberg matrices. These strategies significantly reduce computation time at a reasonable level of precision. Compared with the QR algorithm, the BR algorithm requires fewer iteration steps and less storage space without sacrificing precision in solving eigenvalue problems of large-scale power systems. Numerical examples demonstrate the efficiency of the BR algorithm in eigenanalysis of 39-, 68-, 115-, 300-, and 600-bus systems. The experimental results suggest that the BR algorithm is more efficient for large-scale power system small signal stability eigenanalysis.
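Standard numerical libraries do not expose the BR algorithm, but the QR-based LAPACK path that NumPy wraps illustrates the underlying task: extract the eigenvalues of the linearized state matrix and check their real parts. A sketch on a made-up 2×2 state matrix (values are illustrative, not from the paper's test systems):

```python
import numpy as np

def small_signal_modes(A):
    """Eigenvalues of a power-system state matrix via the QR-based LAPACK
    routine that NumPy wraps (the BR algorithm is a specialized, faster
    alternative for banded upper Hessenberg matrices, not available here).
    A system is small-signal stable when every eigenvalue has a negative
    real part."""
    eig = np.linalg.eigvals(A)
    return eig, bool(np.all(eig.real < 0))

# Made-up 2x2 state matrix in the style of a linearized swing equation
A = np.array([[0.0, 377.0],
              [-0.02, -0.1]])
modes, stable = small_signal_modes(A)
```

For the 600-bus-scale systems in the paper, the state matrix is first reduced to upper Hessenberg form, which is exactly where the BR algorithm's banded-matrix accelerations pay off.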