818 results for Linear matrix inequalities (LMI) techniques
Abstract:
Excess nutrient loads carried by streams and rivers are a great concern for environmental resource managers. In agricultural regions, excess loads are transported downstream to receiving water bodies, where they can cause algal blooms and, in turn, numerous ecological problems. To better understand nutrient load transport and to develop appropriate water management plans, accurate estimates of annual nutrient loads are essential. This study used a Monte Carlo sub-sampling method and error-corrected statistical models to estimate annual nitrate-N loads from two watersheds in central Illinois. The performance of three load estimation methods (the seven-parameter log-linear model, the ratio estimator, and the flow-weighted averaging estimator) applied at one-, two-, four-, six-, and eight-week sampling frequencies was compared. Five error correction techniques (the existing composite method and four new techniques developed in this study) were applied to each combination of sampling frequency and load estimation method. On average, the most accurate error reduction technique (proportional rectangular) produced load estimates 15% and 30% more accurate than the most accurate uncorrected load estimation method (the ratio estimator) for the two watersheds. Using error correction methods, it is possible to design more cost-effective monitoring plans by achieving the same load estimation accuracy with fewer observations. Finally, the optimum combinations of monitoring threshold and sampling frequency that minimize the number of samples required to achieve specified levels of accuracy in load estimation were determined. For one- to three-week sampling frequencies, combined threshold/fixed-interval monitoring approaches produced the best outcomes, while fixed-interval-only approaches produced the most accurate results for four- to eight-week sampling frequencies.
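As an illustration of the ratio estimator mentioned above, here is a minimal sketch with synthetic data (a simplified formulation; the study's Monte Carlo sub-sampling and error-correction steps are not shown, and all numbers are illustrative):

```python
import numpy as np

def ratio_estimator_load(sample_conc, sample_flow, daily_flow):
    """Simple flow-ratio annual load estimate (a sketch, not the study's exact formulation).

    sample_conc : nitrate-N concentrations (mg/L) on sampled days
    sample_flow : daily mean flow (L/day) on those sampled days
    daily_flow  : continuous daily mean flow record (L/day) for the whole year
    """
    sample_load = sample_conc * sample_flow          # mg/day on sampled days
    ratio = sample_load.mean() / sample_flow.mean()  # mean load per unit flow
    return ratio * daily_flow.sum()                  # annual load, mg

# illustrative use with synthetic data and a one-week sampling frequency
rng = np.random.default_rng(0)
daily_flow = rng.lognormal(mean=18, sigma=0.6, size=365)
weekly_idx = np.arange(0, 365, 7)
conc = rng.normal(8.0, 2.0, size=weekly_idx.size).clip(min=0.1)
print(ratio_estimator_load(conc, daily_flow[weekly_idx], daily_flow))
```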
Abstract:
We present a detailed analysis of the application of a multi-scale Hierarchical Reconstruction method for solving a family of ill-posed linear inverse problems. When the observations of the unknown quantity of interest and the observation operators are given, these inverse problems are concerned with the recovery of the unknown from its observations. Although the observation operators we consider are linear, they are inevitably ill-posed in various ways. We recall in this context the classical Tikhonov regularization method with a stabilizing function that targets the specific ill-posedness of the observation operators and preserves desired features of the unknown. Having studied the mechanism of Tikhonov regularization, we propose a multi-scale generalization of it, the so-called Hierarchical Reconstruction (HR) method, whose first introduction can be traced back to the Hierarchical Decomposition method in image processing. The HR method successively extracts information from the previous hierarchical residual into the current hierarchical term at a finer hierarchical scale. The hierarchical sum, i.e. the sum of all hierarchical terms, provides a reasonable approximate solution to the unknown when the observation matrix satisfies certain conditions with specific stabilizing functions. Compared to the Tikhonov regularization method on the same inverse problems, the HR method is shown to decrease the total number of iterations, reduce the approximation error, and offer control over the approximation distance between the hierarchical sum and the unknown, thanks to its use of a ladder of finitely many hierarchical scales. We report numerical experiments supporting these claimed advantages of the HR method over the Tikhonov regularization method.
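A minimal sketch of the hierarchical idea, assuming a plain L2 stabilizer: each scale applies a Tikhonov step to the previous residual with a geometrically refined regularization parameter, and the hierarchical terms are summed (the thesis uses problem-specific stabilizing functions; this is only a generic illustration):

```python
import numpy as np

def tikhonov_step(A, f, lam):
    # One classical Tikhonov step with an L2 stabilizer: min ||A u - f||^2 + lam ||u||^2
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ f)

def hierarchical_reconstruction(A, f, lam0=1.0, n_scales=6, factor=0.25):
    """Sum of hierarchical terms: each scale applies Tikhonov to the previous residual
    with a geometrically refined regularization parameter (finer scale = weaker smoothing)."""
    u_sum = np.zeros(A.shape[1])
    residual = f.astype(float).copy()
    lam = lam0
    for _ in range(n_scales):
        u_k = tikhonov_step(A, residual, lam)   # hierarchical term at this scale
        u_sum += u_k
        residual -= A @ u_k                     # pass what is left to the next, finer scale
        lam *= factor
    return u_sum

# tiny illustration on a mildly ill-conditioned random problem
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 40)) @ np.diag(1.0 / np.arange(1, 41))
u_true = rng.standard_normal(40)
f = A @ u_true + 0.01 * rng.standard_normal(80)
print(np.linalg.norm(hierarchical_reconstruction(A, f) - u_true))
```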
Abstract:
This dissertation presents the design of three high-performance successive-approximation-register (SAR) analog-to-digital converters (ADCs) using distinct digital background calibration techniques under the framework of a generalized code-domain linear equalizer. These digital calibration techniques effectively and efficiently remove the static mismatch errors in the analog-to-digital (A/D) conversion. They enable aggressive scaling of the capacitive digital-to-analog converter (DAC), which also serves as the sampling capacitor, down to the kT/C limit. As a result, outstanding conversion linearity, high signal-to-noise ratio (SNR), high conversion speed, robustness, superb energy efficiency, and minimal chip area are achieved simultaneously. The first design is a 12-bit 22.5/45-MS/s SAR ADC in a 0.13-μm CMOS process. It employs a perturbation-based calibration, built on the superposition property of linear systems, to digitally correct the capacitor mismatch error in the weighted DAC. With 3.0-mW power dissipation at a 1.2-V supply and a 22.5-MS/s sample rate, it achieves a 71.1-dB signal-to-noise-plus-distortion ratio (SNDR) and a 94.6-dB spurious-free dynamic range (SFDR). At the Nyquist frequency, the conversion figure of merit (FoM) is 50.8 fJ/conversion-step, the best FoM reported to date (2010) for 12-bit ADCs. The SAR ADC core occupies 0.06 mm², while the estimated area of the calibration circuits is 0.03 mm². The second proposed digital calibration technique is a bit-wise-correlation-based digital calibration. It utilizes the statistical independence of an injected pseudo-random signal and the input signal to correct the DAC mismatch in SAR ADCs. This idea is experimentally verified in a 12-bit 37-MS/s SAR ADC fabricated in 65-nm CMOS, implemented by Pingli Huang. This prototype chip achieves a 70.23-dB peak SNDR and an 81.02-dB peak SFDR, while occupying 0.12 mm² of silicon area and dissipating 9.14 mW from a 1.2-V supply with the synthesized digital calibration circuits included. The third work is an 8-bit, 600-MS/s, 10-way time-interleaved SAR ADC array fabricated in a 0.13-μm CMOS process. It employs an adaptive digital equalization approach to calibrate both intra-channel nonlinearities and inter-channel mismatch errors. The prototype chip achieves a 47.4-dB SNDR, a 63.6-dB SFDR, less than 0.30-LSB differential nonlinearity (DNL), and less than 0.23-LSB integral nonlinearity (INL). The ADC array occupies an active area of 1.35 mm² and dissipates 30.3 mW, including the synthesized digital calibration circuits and an on-chip dual-loop delay-locked loop (DLL) for clock generation and synchronization.
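A heavily simplified behavioural sketch of the code-domain linear-equalizer idea: the raw SAR bit decisions are recombined with digital weights, and here the weights are estimated by ordinary least squares against known training inputs. This is only an illustration of the code-domain view, not the dissertation's perturbation-based or bit-wise-correlation background calibrations; all mismatch values and sizes are hypothetical.

```python
import numpy as np

def sar_convert(x, dac_weights):
    """Behavioural model of a SAR conversion whose internal DAC has (mismatched) weights.
    Returns the raw bit decisions, MSB first."""
    bits = np.zeros(dac_weights.size)
    acc = 0.0
    for i, w in enumerate(dac_weights):
        if x >= acc + w:          # comparator decision against the trial DAC level
            bits[i] = 1.0
            acc += w
    return bits

rng = np.random.default_rng(1)
N = 12
nominal = 0.5 ** np.arange(1, N + 1)                      # ideal binary weights
actual = nominal * (1 + 0.01 * rng.standard_normal(N))    # hypothetical 1% capacitor mismatch

# "Calibration": estimate code-domain weights by least squares against known inputs.
x_train = rng.uniform(0.0, actual.sum(), 4000)
B = np.array([sar_convert(x, actual) for x in x_train])
w_cal, *_ = np.linalg.lstsq(B, x_train, rcond=None)

# Recombining the raw bits with calibrated weights removes most of the mismatch error.
x_test = rng.uniform(0.0, actual.sum(), 1000)
Bt = np.array([sar_convert(x, actual) for x in x_test])
print("rms error, nominal weights   :", np.std(Bt @ nominal - x_test))
print("rms error, calibrated weights:", np.std(Bt @ w_cal - x_test))
```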
Abstract:
Municipal management in any country requires planning and a balanced allocation of resources. In Brazil, the Law of Budgetary Guidelines (LDO) guides municipal managers toward that balance. This research develops a model that seeks a balanced allocation of public resources in Brazilian municipalities, taking the LDO as a parameter. As a first step, statistical techniques and multicriteria analysis are used to define allocation strategies based on the technical judgement of the municipal manager. In a second step, a linear-programming-based optimization is presented in which the objective function is derived from the preferences of the manager and his staff. The statistical treatment supports the multicriteria development in the definition of replacement rates through time series. The multicriteria analysis was structured by defining the criteria and alternatives and applying the UTASTAR method to calculate the replacement rates. After these initial settings, a linear programming application was developed to find the optimal allocation of resources in the execution of the municipal budget. Data from the budget of a municipality in southwestern Paraná were used to apply the model and analyze the results.
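A minimal sketch of the second (linear programming) step, assuming scipy.optimize.linprog; the budget functions, bounds and preference weights below are hypothetical placeholders for the UTASTAR-derived replacement rates and LDO constraints:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical example: four budget functions (health, education, infrastructure, administration).
weights = np.array([0.35, 0.30, 0.20, 0.15])   # stand-ins for UTASTAR replacement rates
total_budget = 100.0                           # monetary units
lower = np.array([15.0, 25.0, 5.0, 8.0])       # assumed LDO-style minimum shares
upper = np.array([60.0, 60.0, 40.0, 20.0])

res = linprog(
    c=-weights,                                # linprog minimizes, so negate to maximize preference
    A_ub=np.ones((1, 4)), b_ub=[total_budget], # total allocation may not exceed the budget
    bounds=list(zip(lower, upper)),
    method="highs",
)
print("allocation:", res.x, "objective:", -res.fun)
```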
Abstract:
Agricultural crops can be damaged by fungi, insects, worms and other organisms that cause diseases and decrease production yields. The effect of these damaging agents can be reduced using pesticides. Among them, triazole compounds are effective substances against fungi such as Oidium. Nevertheless, residues of these fungicides in foods, as well as in derived products, can affect the health of consumers. Therefore, the European Union has established several regulations fixing maximum pesticide residue levels in a wide range of foods in order to assure consumer safety. Hence, it is very important to develop adequate methods to determine these pesticide compounds. In most cases, gas or liquid chromatographic (GC, LC) separations are used in the analysis of the samples. First, however, it is necessary to use proper sample treatments in order to preconcentrate and isolate the target analytes. To reach this aim, microextraction techniques are very effective tools, because they allow both preconcentration and extraction of the analytes in one simple step, which considerably reduces the sources of error. With these objectives, two remarkable techniques have been widely used during the last years: solid phase microextraction (SPME) and liquid phase microextraction (LPME) in its different variants. Both techniques avoid or reduce the use of toxic solvents and are conveniently coupled to chromatographic equipment, providing good quantitative results for a wide number of matrices and compounds. In this work, simple and reliable methods have been developed using SPME and ultrasound-assisted emulsification microextraction (USAEME) coupled to GC or LC for the determination of triazole fungicides. The proposed methods allow triazole concentrations on the order of μg L‐1 to be determined confidently in different fruit samples. Chemometric tools have been used to accomplish successful determinations: firstly, in the selection and optimization of the variables involved in the microextraction processes, and secondly, to overcome the problems related to overlapping peaks. Different fractional factorial designs have been used for the screening of the experimental variables, and central composite designs have been carried out to obtain the best experimental conditions. To solve the overlapping-peak problems, multivariate calibration methods have been used. Parallel Factor Analysis 2 (PARAFAC2), Multivariate Curve Resolution (MCR) and Parallel Factor Analysis with Linear Dependencies (PARALIND) have been proposed, the algorithms have been chosen according to the data characteristics, and the results have been compared. Grape and apple samples were selected because of their occurrence in the Basque Country and their relevance in the production of cider and the regional wine txakoli. These crops are often treated with triazole compounds to control the problems caused by fungi. The peel and pulp of grape and apple, their juices and some commercial products such as musts, juice and cider have been analysed, showing the adequacy of the developed methods for triazole determination in this kind of fruit sample.
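Since central composite designs are mentioned for optimizing the microextraction conditions, here is a minimal, generic sketch of how the coded points of such a design can be generated; the actual factors and levels used in the thesis are not reproduced here:

```python
import itertools
import numpy as np

def central_composite_design(k, alpha=None, n_center=3):
    """Coded design points of a rotatable central composite design:
    2**k factorial corners, 2k axial points at +/- alpha, and n_center centre points."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25            # rotatability criterion
    corners = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    center = np.zeros((n_center, k))
    return np.vstack([corners, axial, center])

# e.g. three hypothetical factors such as extraction time, temperature and salt amount
print(central_composite_design(3))
```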
Abstract:
One of the wastes produced on a large scale during well drilling is the drill cuttings (drilling gravel). Techniques exist for treating this waste, but there is no consensus on which are the best in economic and environmental terms. One alternative for the disposal of this waste, and the objective of this work, is its incorporation and immobilization in a clay matrix and the assessment of the resulting technological properties. The raw materials were characterized by the following techniques: chemical analysis by X-ray fluorescence (XRF), mineralogical analysis by X-ray diffraction (XRD), grain size analysis, and thermal analysis by thermogravimetry (TG) and differential thermal analysis (DTA). After characterization, samples were formulated with the following percentages of drill cuttings: 0, 5, 10, 15, 25, 50, 75 and 100% (by weight); the pieces were then pressed, dried (110 °C) and sintered at 850, 950 and 1050 °C. After sintering, the samples were tested for water absorption, linear shrinkage, flexural strength, porosity and density, together with XRD and colour tests. The results show that the incorporation of drill cuttings is a viable possibility for the manufacture of solid masonry bricks and ceramic blocks at the concentrations and firing temperatures described here. Residue incorporation reduces an environmental problem and the cost of raw materials for the manufacture of ceramic products.
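For reference, the technological properties named above follow simple standard definitions; a minimal sketch with purely illustrative numbers:

```python
def water_absorption(dry_mass, saturated_mass):
    """Water absorption (%) from dry and water-saturated specimen masses (standard definition)."""
    return 100.0 * (saturated_mass - dry_mass) / dry_mass

def linear_firing_shrinkage(length_dried, length_fired):
    """Linear firing shrinkage (%) from specimen length after drying and after sintering."""
    return 100.0 * (length_dried - length_fired) / length_dried

# illustrative values only (masses in g, lengths in mm)
print(water_absorption(dry_mass=52.1, saturated_mass=56.8))
print(linear_firing_shrinkage(length_dried=60.0, length_fired=58.9))
```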
Abstract:
Global land cover maps play an important role in understanding the Earth's ecosystem dynamics. Several global land cover maps have been produced recently, namely Global Land Cover Share (GLC-Share) and GlobeLand30. These datasets are very useful sources of land cover information, and potential users and producers are often interested in comparing them. However, these global land cover maps are produced with different techniques and different classification schemes, which makes their interoperability in a standardized way a challenge. The Environmental Information and Observation Network (EIONET) Action Group on Land Monitoring in Europe (EAGLE) concept was developed in order to translate the differences between the classification schemes into a standardized format that allows a comparison between class definitions. This is done by elaborating an EAGLE matrix for each classification scheme, in which a bar code is assigned to each class definition that composes a certain land cover class. Ahlqvist (2005) developed an overlap metric to cope with the semantic uncertainty of geographical concepts, thereby providing a measure of how closely geographical concepts are related to each other. In this paper, the comparison of global land cover datasets is done by translating each land cover legend into the EAGLE bar coding for the land cover components of the EAGLE matrix. The bar coding values assigned to each class definition are transformed into a fuzzy function that is used to compute the overlap metric proposed by Ahlqvist (2005), and overlap matrices between land cover legends are elaborated. The overlap matrices allow the semantic comparison of the classification schemes of the global land cover maps. The proposed methodology is tested on a case study in which the overlap metric proposed by Ahlqvist (2005) is computed in the comparison of two global land cover maps for Continental Portugal. The study yields the spatial distribution of the overlap between the two global land cover maps, GlobeLand30 and GLC-Share. These results show that the GlobeLand30 product overlaps with the GLC-Share product to a degree of 77% in Continental Portugal.
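A minimal sketch of the kind of fuzzy overlap computation described above. The membership values are hypothetical stand-ins for the EAGLE bar-coding of two class definitions, and the simple cardinality ratio used here is only in the spirit of Ahlqvist's (2005) metric, whose exact form may differ:

```python
import numpy as np

def fuzzy_overlap(mu_a, mu_b):
    """Directional fuzzy overlap of class A onto class B: fuzzy cardinality of the
    intersection divided by the cardinality of A.
    mu_a, mu_b : membership values (0..1) of each land-cover component in classes A and B."""
    mu_a, mu_b = np.asarray(mu_a, float), np.asarray(mu_b, float)
    return np.minimum(mu_a, mu_b).sum() / mu_a.sum()

# Hypothetical memberships over a few EAGLE land cover components
# (e.g. woody vegetation, shrubs, cropland, water)
globeland30_forest = [1.0, 0.5, 0.0, 0.0]
glcshare_treecover = [1.0, 0.3, 0.0, 0.0]
print(fuzzy_overlap(globeland30_forest, glcshare_treecover))
```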
Abstract:
Matrix converters convert a three-phase alternating-current power supply to a power supply of a different peak voltage and frequency, and are an emerging technology in a wide variety of applications. However, they are susceptible to an instability, whose behaviour is examined herein. The desired “steady-state” mode of operation of the matrix converter becomes unstable in a Hopf bifurcation as the output/input voltage transfer ratio, q, is increased through some threshold value, qc. Through weakly nonlinear analysis and direct numerical simulation of an averaged model, we show that this bifurcation is subcritical for typical parameter values, leading to hysteresis in the transition to the oscillatory state: there may thus be undesirable large-amplitude oscillations in the output voltages even when q is below the linear stability threshold value qc.
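To illustrate the subcritical character and the resulting hysteresis, a minimal sketch using the generic amplitude (normal-form) equation for a subcritical Hopf bifurcation with quintic saturation; the coefficients are illustrative and are not the converter-specific ones derived in the paper, with mu playing the role of q - qc:

```python
import numpy as np

# Generic weakly nonlinear amplitude equation for a SUBcritical Hopf bifurcation:
#   dr/dt = mu*r + r**3 - r**5
# Non-trivial steady amplitudes satisfy mu + r**2 - r**4 = 0, i.e. r**2 = (1 +/- sqrt(1+4*mu))/2.
mu = np.linspace(-0.4, 0.2, 13)
disc = 1 + 4 * mu
sqrt_disc = np.sqrt(np.clip(disc, 0, None))
upper = np.where(disc >= 0, (1 + sqrt_disc) / 2, np.nan)                 # stable large-amplitude branch
lower = np.where((disc >= 0) & (mu < 0), (1 - sqrt_disc) / 2, np.nan)    # unstable branch

for m, lo, hi in zip(mu, lower, upper):
    print(f"mu={m:+.2f}  unstable r^2={lo}  stable r^2={hi}")
# For -1/4 < mu < 0 a large-amplitude oscillation coexists with the stable steady state,
# which is the hysteresis scenario described in the abstract.
```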
Abstract:
We explore the recently developed snapshot-based dynamic mode decomposition (DMD) technique, a matrix-free Arnoldi-type method, to predict 3D linear global flow instabilities. We apply the DMD technique to flows confined in an L-shaped cavity and compare the resulting modes to their counterparts issued from classic, matrix-forming, linear instability analysis (i.e. the BiGlobal approach) and direct numerical simulations. Results show that the DMD technique, which uses snapshots generated by a 3D non-linear incompressible discontinuous Galerkin Navier–Stokes solver, provides very similar results to classical linear instability analysis techniques. In addition, we compare DMD results issued from non-linear and linearised Navier–Stokes solvers, showing that linearisation is not necessary (i.e. a base flow is not required) to obtain linear modes, as long as the analysis is restricted to the exponential growth regime, that is, the flow regime governed by the linearised Navier–Stokes equations; this demonstrates the potential of this type of snapshot-based analysis for general-purpose CFD codes, without the need for modifications. Finally, this work shows that the DMD technique can provide three-dimensional direct and adjoint modes through snapshots provided by the linearised and adjoint linearised Navier–Stokes equations advanced in time. Subsequently, these modes are used to provide structural sensitivity maps and sensitivity-to-base-flow-modification information for 3D flows and complex geometries, at an affordable computational cost. The information provided by the sensitivity study is used to modify the L-shaped geometry and control the most unstable 3D mode.
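A minimal sketch of snapshot-based (exact) DMD in the generic form used conceptually above; X collects equispaced flow snapshots as columns, and the implementation is not tied to the discontinuous Galerkin solver of the paper:

```python
import numpy as np

def dmd_modes(X, r=None):
    """Snapshot-based exact DMD.  X: columns are snapshots equispaced in time.
    Returns DMD eigenvalues and modes; growth rates and frequencies follow from
    log(eigenvalues) divided by the snapshot spacing."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    if r is not None:                                    # optional truncation of the snapshot basis
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)   # low-rank operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W             # exact DMD modes
    return eigvals, modes
```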
Abstract:
Coprime and nested sampling are well-known deterministic sampling techniques that operate at rates significantly lower than the Nyquist rate, and yet allow perfect reconstruction of the spectra of wide-sense stationary signals. However, theoretical guarantees for these samplers assume ideal conditions such as synchronous sampling and the ability to compute statistical expectations exactly. This thesis studies the performance of coprime and nested samplers in the spatial and temporal domains when these assumptions are violated. In the spatial domain, the robustness of these samplers is studied by considering arrays with perturbed sensor locations (with unknown perturbations). Simplified expressions for the Fisher information matrix of perturbed coprime and nested arrays are derived, which explicitly highlight the role of the co-array. It is shown that, even in the presence of perturbations, it is possible to resolve $O(M^2)$ sources under appropriate conditions on the size of the grid. The assumption of small perturbations leads to a novel "bi-affine" model in terms of source powers and perturbations. The redundancies in the co-array are then exploited to eliminate the nuisance perturbation variables and reduce the bi-affine problem to a linear underdetermined (sparse) problem in the source powers. This thesis also studies the robustness of coprime sampling to a finite number of samples and to sampling jitter, by analyzing their effects on the quality of the estimated autocorrelation sequence. A variety of bounds on the error introduced by such non-ideal sampling schemes are computed by considering a statistical model for the perturbation. They indicate that coprime sampling leads to stable estimation of the autocorrelation sequence in the presence of small perturbations. Under appropriate assumptions on the distribution of the WSS signal, sharp bounds on the estimation error are established, which indicate that the error decays exponentially with the number of samples. The theoretical claims are supported by extensive numerical experiments.
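A minimal sketch showing why coprime arrays yield many more virtual lags than physical sensors, using the standard coprime-array convention; this is generic and unrelated to the thesis's specific perturbation analysis:

```python
import numpy as np

def coprime_array(M, N):
    """Sensor positions (in units of the base spacing) of a standard coprime array with
    coprime integers M, N: M sensors spaced by N, interleaved with 2N sensors spaced by M."""
    sub1 = N * np.arange(M)
    sub2 = M * np.arange(2 * N)
    return np.unique(np.concatenate([sub1, sub2]))

def difference_coarray(positions):
    """All pairwise differences: the virtual lags on which correlations are estimated."""
    diff = positions[:, None] - positions[None, :]
    return np.unique(diff)

pos = coprime_array(3, 5)
lags = difference_coarray(pos)
print(len(pos), "physical sensors ->", len(lags), "distinct co-array lags")
# The enlarged co-array is what allows O(M^2) sources to be resolved with O(M) sensors,
# the property whose robustness to perturbations is studied in the thesis.
```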
Abstract:
Over recent years the structural ceramics industry in Brazil has found a very favorable market for growth. However, difficulties related to productivity and product quality are partially inhibiting this possible growth. One alternative for trying to solve these problems, and thus enable the full development of the ceramics industry, is the substitution of natural gas for the firewood used in the firing process. In order to contribute to this process of technological innovation, this work studies the effect of the combined use of ceramic phyllite and kaolin waste on the properties of a clay matrix, verifying the possible benefits that these raw materials can bring to the final product, as well as their potential to reduce the heat load necessary to obtain products of equal or superior quality. The study was divided into two steps: characterization of the materials and study of the formulations. Two clays, a phyllite and a kaolin waste were characterized by the following techniques: laser granulometry, plasticity index by Atterberg limits, X-ray fluorescence, X-ray diffraction, mineralogical composition by Rietveld refinement, and thermogravimetric and differential thermal analysis. To study the formulations, specifically to evaluate the technological properties of the pieces, an experimental design was used that combined a three-component mixture design (standard mass × phyllite × kaolin waste) with a 2³ factorial design with centre point for the thermal processing parameters. The experiment was performed with restricted strip-plot randomization. In total, 13 compositional points were investigated within the following constraints: phyllite ≤ 20% by weight, kaolin waste ≤ 40% by weight, and standard mass ≥ 60% by weight. The thermal parameters were used at the following levels: 750 and 950 °C for the firing temperature, 5 and 15 °C/min for the heating rate, and 15 and 45 min for the soak time. The results showed that the introduction of phyllite and/or kaolin waste into the ceramic body produced a number of benefits in the properties of the final product, such as decreased water absorption, apparent porosity and linear firing shrinkage, as well as increased apparent density and mechanical strength of the pieces. The best results were obtained at the compositional points where the sum of the levels of kaolin waste and phyllite was maximal (40% by weight), as well as under the conditions fired at 950 °C. Regarding the prospect of savings in the heat energy required to form the desired microstructure, the phyllite and the kaolin waste, owing to their small particle sizes and mineralogical constitutions containing fluxing phases, contributed to the optimization of the firing cycle.
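A minimal sketch of how the combined design described above can be enumerated: a grid over the constrained mixture region plus the 2³ factorial with centre point for the thermal parameters. The grid is purely illustrative and does not reproduce the 13 compositional points actually used:

```python
import itertools
import numpy as np

# Constrained mixture region (weight fractions): phyllite <= 0.20, kaolin waste <= 0.40,
# standard mass >= 0.60, components summing to 1.  Grid-enumerate feasible blends.
step = 0.05
blends = [(p, k, 1 - p - k)
          for p in np.arange(0, 0.20 + 1e-9, step)
          for k in np.arange(0, 0.40 + 1e-9, step)
          if 1 - p - k >= 0.60 - 1e-9]

# 2^3 factorial with a centre point for the thermal parameters
# (firing temperature 750/950 C, heating rate 5/15 C/min, soak time 15/45 min).
levels = {"T_C": (750, 950), "rate_C_min": (5, 15), "soak_min": (15, 45)}
factorial = list(itertools.product(*levels.values()))
center = tuple(sum(v) / 2 for v in levels.values())
print(len(blends), "feasible blends;", len(factorial), "factorial runs + centre point", center)
```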
Abstract:
One of the main activities in petroleum engineering is to estimate the oil production of existing oil reserves. The calculation of these reserves is crucial to determine the economic feasibility of their exploitation. Currently, the petroleum industry is facing problems in analyzing production due to the exponentially increasing amount of data provided by the production facilities. Conventional reservoir modeling techniques, such as numerical reservoir simulation and visualization, are well developed and available. This work proposes intelligent methods, such as artificial neural networks, to predict oil production, and compares the results with those obtained by numerical simulation, a method widely used in practice to predict oil production behavior. Artificial neural networks are used because of their learning, adaptation and interpolation capabilities.
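A minimal sketch of the neural network idea, assuming scikit-learn's MLPRegressor and entirely synthetic data; the feature names and the decline relation are illustrative, and in practice the inputs would come from field production history and reservoir-simulation data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: learn a mapping from production-history features to oil rate.
rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.uniform(0, 10, n),      # years on production
    rng.uniform(50, 300, n),    # bottom-hole pressure (bar)
    rng.uniform(0.0, 0.9, n),   # water cut (fraction)
])
y = 1000 * np.exp(-0.15 * X[:, 0]) * (1 - X[:, 2]) + rng.normal(0, 20, n)   # synthetic decline

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```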