973 results for Roundoff errors.


Relevance: 10.00%

Abstract:

In this study, the innovation approach is used to estimate the total measurement error associated with power system state estimation. This is required because the power system equations are highly correlated with each other and, as a consequence, part of the measurement errors is masked. For that purpose, an innovation index (II) is proposed that quantifies the new information a measurement contains. A critical measurement is the limit case of a measurement with low II: it has a zero II and its error is totally masked. In other words, such a measurement brings no innovation to the gross error test. Using the II of a measurement, the gross error masked by the state estimation is recovered, and the total gross error of that measurement is then composed. Instead of the classical normalised measurement residual amplitude, the corresponding normalised composed measurement residual amplitude is used in the gross error detection and identification test, but with m degrees of freedom. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to existing state estimation software. The IEEE 14-bus system is used to validate the proposed gross error detection and identification test.
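As a point of reference, the classical largest-normalized-residual test that the composed-residual approach extends can be sketched as follows. This is a generic illustration, not the paper's algorithm; the residual values, standard deviations, and the 3.0 threshold are all assumed.

```python
# Illustrative sketch of the classical normalized-residual gross error
# test: each residual is divided by its standard deviation and compared
# against a detection threshold (3.0 is a common, assumed choice).

def normalized_residuals(residuals, residual_stddevs):
    """Return r_N(i) = |r_i| / sigma_r(i) for each measurement i."""
    return [abs(r) / s for r, s in zip(residuals, residual_stddevs)]

def detect_gross_errors(residuals, residual_stddevs, threshold=3.0):
    """Flag measurements whose normalized residual exceeds the threshold."""
    r_n = normalized_residuals(residuals, residual_stddevs)
    return [i for i, v in enumerate(r_n) if v > threshold]

# Hypothetical numbers: measurement 2 carries a gross error.
res = [0.01, -0.02, 0.45, 0.005]
std = [0.01, 0.015, 0.01, 0.008]
suspect = detect_gross_errors(res, std)
```

The paper's contribution replaces the plain residual above with a composed residual that restores the error fraction masked by low-innovation measurements.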

Relevance: 10.00%

Abstract:

This paper develops H(infinity) control designs based on neural networks for fully actuated and underactuated cooperative manipulators. The neural networks proposed in this paper adapt only the uncertain dynamics of the robot manipulators; they work as a complement to the nominal model. The H(infinity) performance index includes the position errors as well as the squeeze force errors between the manipulator end-effectors and the object, which represents a complete disturbance rejection scenario. For the underactuated case, the squeeze force control problem is more difficult to solve because some degrees of manipulator actuation are lost. Results obtained from an actual cooperative manipulator, which is able to work as both a fully actuated and an underactuated manipulator, are presented. (C) 2008 Elsevier Ltd. All rights reserved.

Relevance: 10.00%

Abstract:

With the relentless quest for improved performance driving ever tighter manufacturing tolerances, machine tools are sometimes unable to meet the desired requirements. One option for improving the tolerances of machine tools is to compensate for their errors. Among all possible sources of machine tool error, thermally induced errors are, in general for newer machines, the most important. The present work demonstrates the evaluation and modelling of the thermal error behaviour of a CNC cylindrical grinding machine during its warm-up period.

Relevance: 10.00%

Abstract:

This paper presents results of a verification test of a Direct Numerical Simulation code of mixed high order of accuracy using the method of manufactured solutions (MMS). The test is based on the formulation of an analytical solution for the Navier-Stokes equations modified by the addition of a source term. The numerical code was aimed at simulating the temporal evolution of instability waves in a plane Poiseuille flow. The governing equations were solved in a vorticity-velocity formulation for a two-dimensional incompressible flow. The code employed two different numerical schemes. One used mixed high-order compact and non-compact finite differences of fourth- to sixth-order accuracy. The other used spectral methods instead of finite differences in the streamwise direction, which was periodic. In the present test, particular attention was paid to the boundary conditions of the physical problem of interest. Indeed, the verification procedure using MMS can be more demanding than the commonly used comparison with Linear Stability Theory, particularly because the latter pays no attention to the nonlinear terms. For the present test, it was possible to manufacture an analytical solution that reproduced some aspects of an instability wave in a nonlinear stage. Although the results of the verification by MMS for this mixed-order numerical scheme had to be interpreted with care, the test was very useful, as it gave confidence that the code was free of programming errors. Copyright (C) 2009 John Wiley & Sons, Ltd.
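The MMS idea itself can be illustrated on a much simpler problem than the paper's Navier-Stokes code. In the sketch below (all details illustrative), the solution u_m(x) = sin(x) is manufactured for the 1D problem u'' = f, the corresponding source term is derived analytically, and the observed order of accuracy of a second-order central-difference solver is checked against its design order.

```python
import math

def solve_poisson(n):
    # Manufactured solution u_m(x) = sin(x) on (0, pi) with u(0) = u(pi) = 0;
    # plugging u_m into u'' = f gives the manufactured source f = -sin(x).
    h = math.pi / n
    x = [i * h for i in range(n + 1)]
    f = [-math.sin(xi) for xi in x]
    m = n - 1                       # number of interior unknowns
    b = [-2.0] * m                  # main diagonal of the (1, -2, 1) stencil
    d = [h * h * f[i + 1] for i in range(m)]
    # Thomas algorithm (sub- and super-diagonal entries are all 1.0)
    for i in range(1, m):
        w = 1.0 / b[i - 1]
        b[i] -= w
        d[i] -= w * d[i - 1]
    u = [0.0] * m
    u[-1] = d[-1] / b[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (d[i] - u[i + 1]) / b[i]
    # Maximum error against the manufactured solution
    return max(abs(u[i] - math.sin(x[i + 1])) for i in range(m))

# Halving h should cut the error by ~4 for a second-order scheme.
e_coarse, e_fine = solve_poisson(32), solve_poisson(64)
observed_order = math.log(e_coarse / e_fine, 2)
```

The same grid-refinement check, applied to the manufactured Navier-Stokes solution, is what gives the code-verification evidence described in the abstract.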

Relevance: 10.00%

Abstract:

Hydrodynamic journal bearings are susceptible to static angular misalignment resulting from improper assembly, elastic and thermal distortion of the shaft and bearing housing, and manufacturing errors. Several previous works on the theme, both theoretical and experimental, focused on determining the static properties of angularly misaligned bearings. Although some reports show agreement between theoretical and experimental results, the increasingly severe operating conditions of hydrodynamic bearings (heavy loads and high rotational speeds) require more reliable theoretical formulations for evaluating journal performance during the design process. The treatment of angular misalignment in the derivation of the Reynolds equation is presented here in detail, showing that properly conducted geometric and order-of-magnitude analyses lead to the inclusion of an axial wedge effect term that influences the velocity and pressure fields in the lubricant film. Numerical results show that this axial wedge effect affects the hydrodynamic forces and static operational properties of tilted short journal bearings most significantly.

Relevance: 10.00%

Abstract:

A methodology for rock-excavation structural-reliability analysis using Distinct Element Method numerical models is presented. The methodology addresses a shortcoming of conventional numerical models, which supply only point results and use fixed input parameters, without considering their statistical errors. The analysis of rock-excavation stability must consider uncertainties arising from geological variability, from the choice of mechanical behaviour hypothesis, and from the parameters adopted in numerical model construction. These uncertainties can be analyzed with simple deterministic models, but a new methodology was developed for numerical models with results of several natures. The methodology is based on Monte Carlo simulations and uses principles of Paraconsistent Logic. It is presented through the analysis of a final slope of a large surface mine.
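The Monte Carlo part of such a reliability analysis can be sketched in a few lines. In this illustration a simple analytic factor-of-safety function stands in for the Distinct Element Method model, and the input distributions (cohesion, friction coefficient) are assumed, not taken from the paper.

```python
import random

def factor_of_safety(cohesion, friction_coeff):
    # Hypothetical limit state: resisting over driving stresses (kPa);
    # a stand-in for a full numerical slope-stability model.
    return (cohesion + 80.0 * friction_coeff) / 100.0

def probability_of_failure(n_samples=20000, seed=1):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        c = rng.gauss(60.0, 10.0)     # cohesion: assumed mean and std
        mu = rng.gauss(0.6, 0.08)     # friction coefficient: assumed stats
        if factor_of_safety(c, mu) < 1.0:
            failures += 1
    return failures / n_samples

pf = probability_of_failure()
```

In the paper's setting each sample would trigger a full numerical model run, which is why the sampling budget and the treatment of mixed-nature results become the central design issues.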

Relevance: 10.00%

Abstract:

The mechanical behavior of elastic materials is modeled by a pair of independent constants (Young's modulus and Poisson's ratio). A precise measurement of both constants is necessary in some applications, such as the quality control of mechanical elements and of standard materials used for the calibration of equipment. Ultrasonic techniques have been used because wave velocity depends on the elastic properties of the propagation medium, and the ultrasonic test shows better repeatability and accuracy than tensile and indentation tests. In this work, the theoretical and experimental aspects of the ultrasonic through-transmission technique for the characterization of elastic solids are presented. Furthermore, an amorphous material and some polycrystalline materials were tested. Results have shown excellent repeatability, with numerical errors of less than 3% in high-purity samples.
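The link between measured wave velocities and the two elastic constants is given by the standard isotropic relations, sketched below. The velocity and density values are typical of steel and are assumed for illustration only.

```python
# Standard isotropic relations used in through-transmission testing:
# Poisson's ratio and Young's modulus from the measured longitudinal
# and shear (transverse) wave velocities and the material density.

def elastic_constants(v_long, v_shear, density):
    vl2, vt2 = v_long ** 2, v_shear ** 2
    nu = (vl2 - 2.0 * vt2) / (2.0 * (vl2 - vt2))   # Poisson's ratio
    e = 2.0 * density * vt2 * (1.0 + nu)           # Young's modulus (Pa)
    return e, nu

# Assumed, steel-like values: v_long and v_shear in m/s, density in kg/m^3.
E, nu = elastic_constants(v_long=5900.0, v_shear=3200.0, density=7850.0)
```

Because both constants are ratios of squared velocities, the quoted sub-3% errors in the constants track directly back to the velocity measurement accuracy.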

Relevance: 10.00%

Abstract:

Three-dimensional modeling of piezoelectric devices requires precise knowledge of the piezoelectric material parameters. The most commonly used piezoelectric materials belong to the 6mm symmetry class, which has ten independent constants. In this work, a methodology to obtain precise material constants over a wide frequency band through finite element analysis of a piezoceramic disk is presented. Given an experimental electrical impedance curve and a first estimate of the piezoelectric material properties, the objective is to find the material properties that minimize the difference between the electrical impedance calculated by the finite element method and that obtained experimentally with an electrical impedance analyzer. The methodology consists of four basic steps: experimental measurement, identification of the vibration modes and their sensitivity to the material constants, a preliminary identification algorithm, and final refinement of the material constants using an optimization algorithm. The application of the methodology is exemplified using a hard lead zirconate titanate piezoceramic, and the same methodology is then applied to a soft piezoceramic. The errors in the identification of each parameter are statistically estimated in both cases; they are less than 0.6% for the elastic constants and less than 6.3% for the dielectric and piezoelectric constants.
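The refinement step can be illustrated with a deliberately tiny stand-in: fit one parameter of a schematic single-mode impedance model to a "measured" curve in a least-squares sense. The model, the frequency band, and the coarse-to-fine scan are all assumptions; the real methodology fits ten constants against a finite element model.

```python
# Toy least-squares fit mimicking the final refinement step. A schematic
# one-mode resonator replaces the finite element model; all numbers are
# illustrative, not real piezoceramic constants.

def impedance(freqs, fr):
    # |Z| of a schematic resonator with resonance at fr (arbitrary units).
    return [abs(1.0 - (f / fr) ** 2) + 0.05 for f in freqs]

freqs = [0.80 + 0.01 * i for i in range(41)]        # normalized band
measured = impedance(freqs, fr=1.00)                # synthetic "experiment"

def cost(fr):
    model = impedance(freqs, fr)
    return sum((m - z) ** 2 for m, z in zip(model, measured))

# Simple parameter scan as a stand-in for the optimization algorithm.
best = min((cost(fr), fr) for fr in [0.90 + 0.001 * k for k in range(201)])
fitted_fr = best[1]
```

The paper's sensitivity analysis plays the role that the single free parameter plays here: it tells the optimizer which constants each vibration mode can actually constrain.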

Relevance: 10.00%

Abstract:

Real-time viscosity measurement remains a necessity for highly automated industry. To address this problem, many studies have been carried out using an ultrasonic shear wave reflectance method. This method is based on the determination of the magnitude and phase of the complex reflection coefficient at the solid-liquid interface. Although the magnitude is a stable quantity whose measurement is relatively simple and precise, phase measurement is a difficult task because of its strong temperature dependence. A simplified method that uses only the magnitude of the reflection coefficient, valid in the Newtonian regime, has been proposed by some authors, but the obtained viscosity values do not match conventional viscometry measurements. In this work, a mode conversion measurement cell was used to measure the viscosity of glycerin as a function of temperature (15 to 25 degrees C) and of corn syrup-water mixtures as a function of concentration (70 to 100 wt% corn syrup). Tests were carried out at 1 MHz. A novel signal processing technique that calculates the reflection coefficient magnitude over a frequency band, instead of at a single frequency, was studied. The effects of the bandwidth on magnitude and viscosity were analyzed, and the results were compared with the values predicted by the Newtonian liquid model. The frequency-band technique improved the magnitude results: the obtained viscosity values came close to those measured by the rotational viscometer, with percentage errors up to 14%, whereas errors up to 96% were found for the single-frequency method.
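The magnitude-only Newtonian inversion mentioned above can be sketched as follows. The form used here (a real-valued reflection coefficient giving the liquid shear impedance, and eta = Z^2 / (omega * rho)) is an assumed simplification, and the glycerin numbers are illustrative; the paper's frequency-band processing is not reproduced.

```python
import math

def viscosity_from_magnitude(R, z_solid, freq_hz, rho_liquid):
    # Magnitude-only Newtonian inversion (assumed simplified form):
    # treat R as real, recover the liquid shear impedance, then use
    # |Z_liquid| = sqrt(omega * rho * eta) to solve for eta.
    omega = 2.0 * math.pi * freq_hz
    z_liquid = z_solid * (1.0 - R) / (1.0 + R)
    return z_liquid ** 2 / (omega * rho_liquid)

# Round-trip check with assumed values for glycerin at 1 MHz:
rho, eta_true, f, z_s = 1260.0, 1.0, 1.0e6, 8.3e6
z_l = math.sqrt(2.0 * math.pi * f * rho * eta_true)  # |Z| = sqrt(omega*rho*eta)
R = (z_s - z_l) / (z_s + z_l)
eta = viscosity_from_magnitude(R, z_s, f, rho)
```

Note how close R sits to 1: a tiny error in the measured magnitude is amplified strongly in eta, which is why averaging the magnitude over a frequency band pays off.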

Relevance: 10.00%

Abstract:

This work describes the use of a large-aperture PVDF receiver in the measurement of liquid density and composite material elastic constants. The density of several liquids is measured with an accuracy of 0.2% using a conventional NDE emitter transducer and a 70-mm-diameter, 52-µm P(VDF-TrFE) membrane with gold electrodes. The determination of the elastic constants is based on phase velocity measurement. Diffraction can lead to errors of around 1% in velocity measurement when using the conventional pair of ultrasonic transducers (1-MHz frequency and 19-mm diameter) operating in through-transmission mode, separated by a distance of 100 mm. This effect is negligible when using a pair of 10-MHz, 19-mm-diameter transducers; nevertheless, dispersion at 10 MHz can result in errors of about 0.5% when measuring the velocity in composite materials. The use of an 80-mm-diameter, 52-µm-thick PVDF membrane receiver practically eliminates the diffraction effects in phase velocity measurement. The elastic constants of a carbon fiber reinforced polymer were determined and compared with the values obtained by a tensile test. (C) 2009 Elsevier B.V. All rights reserved.

Relevance: 10.00%

Abstract:

This paper deals with the problem of tracking target sets using a model predictive control (MPC) law. Some MPC applications require a control strategy in which certain system outputs are controlled within specified ranges or zones (zone control), while other variables - possibly including input variables - are steered to a fixed target or set-point. In real applications, this problem is often handled by including and excluding appropriate penalizations of the output errors in the control cost function. In this way, throughout the continuous operation of the process, the control system keeps switching from one controller to another, and even if a stabilizing control law is developed for each control configuration, switching among stable controllers does not necessarily produce a stable closed-loop system. From a theoretical point of view, the control objective of this kind of problem can be seen as a target set (in the output space) instead of a target point, since inside the zones there is no preference between one point and another. In this work, a stable MPC formulation for constrained linear systems, with several practical properties, is developed for this scenario. The concept of the distance from a point to a set is exploited to propose an additional cost term, which ensures both recursive feasibility and local optimality. The performance of the proposed strategy is illustrated by simulation of an ill-conditioned distillation column. (C) 2010 Elsevier Ltd. All rights reserved.
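The distance-from-a-point-to-a-set idea is easy to sketch for box-shaped zones (an assumption made here for illustration; the paper's formulation is more general): outputs inside their zones contribute nothing to the cost, while outputs outside are penalized by the squared distance to the nearest zone boundary.

```python
# Sketch of a zone-control cost term based on the distance to a set.
# Box zones and the example numbers are assumptions for illustration.

def distance_to_zone(y, lo, hi):
    if y < lo:
        return lo - y
    if y > hi:
        return y - hi
    return 0.0   # inside the zone: no preference between points

def zone_cost(outputs, zones, weights):
    return sum(w * distance_to_zone(y, lo, hi) ** 2
               for y, (lo, hi), w in zip(outputs, zones, weights))

# Hypothetical two-output example: the first output lies inside its zone,
# the second sits one unit above its upper bound.
cost = zone_cost([1.5, 3.0], [(1.0, 2.0), (0.0, 2.0)], [1.0, 1.0])
```

Because the cost is zero everywhere inside the zone, the controller never has to switch penalty terms on and off, which is exactly the switching behaviour the abstract identifies as a stability risk.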

Relevance: 10.00%

Abstract:

This paper presents an analysis of the performance of a baseband multiple-input single-output (MISO) time reversal ultra-wideband system (TR-UWB) incorporating a symbol-spaced decision feedback equalizer (DFE). A semi-analytical performance analysis based on a Gaussian approach is considered, which matches simulation results well, even in the DFE case. The channel model adopted is based on the IEEE 802.15.3a model, considering correlated shadowing across antenna elements. To provide a more realistic analysis, channel estimation errors are considered in the design of the TR filter. A guideline for the choice of equalizer length is provided. The results show that the system's performance improves with an increase in the number of transmit antennas and when a symbol-spaced equalizer is used with a relatively small number of taps compared to the number of resolvable paths in the channel impulse response. Moreover, it is possible to conclude that, due to the time reversal scheme, error propagation in the DFE does not play a role in the system's performance.
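The DFE principle itself is simple to sketch: past symbol decisions are fed back to cancel post-cursor intersymbol interference. The example below is a minimal noiseless BPSK case over an assumed two-tap channel, not the paper's TR-UWB setup.

```python
# Minimal decision feedback equalizer sketch: the feedback tap subtracts
# the ISI contributed by the previous decision before slicing.
symbols = [1, -1, -1, 1, 1, -1, 1, -1]   # BPSK symbols (assumed)
h = [1.0, 0.5]                           # assumed two-tap channel
received = [symbols[n] * h[0] + (symbols[n - 1] * h[1] if n else 0.0)
            for n in range(len(symbols))]

decisions = []
prev = 0.0
for r in received:
    z = r - h[1] * prev          # cancel post-cursor ISI via feedback
    d = 1 if z >= 0 else -1      # BPSK slicer
    decisions.append(d)
    prev = d
```

In the noiseless case the decisions reproduce the transmitted symbols exactly; with noise, a wrong decision normally propagates through the feedback path, which is the error-propagation effect the abstract finds negligible under time reversal.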

Relevance: 10.00%

Abstract:

In this paper the continuous Verhulst dynamic model is used to synthesize a new distributed power control algorithm (DPCA) for use in direct sequence code division multiple access (DS-CDMA) systems. The Verhulst model was originally designed to describe the population growth of biological species under food and physical space restrictions. The discretization of the corresponding differential equation is accomplished via the Euler numeric integration (ENI) method. Analytical convergence conditions for the proposed DPCA are also established. Several properties of the proposed recursive algorithm, such as the Euclidean distance from the optimum vector after convergence, convergence speed, normalized mean squared error (NSE), average power consumption per user, performance under dynamic channels, and implementation complexity, are analyzed through simulations. The simulation results are compared with two other DPCAs: the classic algorithm derived by Foschini and Miljanic and the sigmoidal algorithm of Uykan and Koivo. Under estimation-error conditions, the proposed DPCA exhibits smaller discrepancy from the optimum power vector solution and better convergence (under fixed and adaptive convergence factors) than the classic and sigmoidal DPCAs. (C) 2010 Elsevier GmbH. All rights reserved.
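A Verhulst-style power update can be sketched as an Euler discretization of a logistic-type equation driven by the SINR error. The exact recursion in the paper may differ; the form below, the link-gain matrix, the noise level, and the SINR target are all assumptions chosen so the two-user example converges.

```python
# Euler-discretized logistic (Verhulst-like) power update, assumed form:
# dp/dt = alpha * p * (1 - gamma / gamma_target), iterated per user.
gains = [[1.0, 0.1], [0.1, 1.0]]     # assumed link-gain matrix
noise, gamma_target, alpha = 0.01, 5.0, 0.1
p = [1.0, 1.0]                       # initial power vector

def sinr(p, i):
    interference = noise + sum(gains[i][j] * p[j]
                               for j in range(len(p)) if j != i)
    return gains[i][i] * p[i] / interference

for _ in range(500):
    g = [sinr(p, i) for i in range(len(p))]
    p = [p[i] + alpha * p[i] * (1.0 - g[i] / gamma_target)
         for i in range(len(p))]

final_sinr = [sinr(p, i) for i in range(len(p))]
```

At the fixed point every user meets its SINR target exactly; the step size alpha plays the role of the convergence factor whose fixed and adaptive variants the abstract compares.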

Relevance: 10.00%

Abstract:

In this work, a broad analysis of local search multiuser detection (LS-MUD) for direct sequence code division multiple access (DS/CDMA) systems under multipath channels is carried out, considering the performance-complexity trade-off. The robustness of the LS-MUD to variations in loading, E(b)/N(0), near-far effect, number of Rake receiver fingers, and errors in the channel coefficient estimates is verified. A comparative analysis of the bit error rate (BER) and complexity trade-off is carried out among LS, genetic algorithm (GA), and particle swarm optimization (PSO) detectors. Based on the deterministic behavior of the LS algorithm, simplifications of the cost function calculation are also proposed, yielding more efficient algorithms (simplified and combined LS-MUD versions) and creating new perspectives for MUD implementation. The computational complexity is expressed in terms of the number of operations required to converge. Our conclusions point out that the simplified LS (s-LS) method is always more efficient, independent of the system conditions, achieving better performance with lower complexity than the other heuristic detectors. In addition, its deterministic strategy and absence of input parameters make the s-LS algorithm the most appropriate for the MUD problem. (C) 2008 Elsevier GmbH. All rights reserved.
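The flavor of bit-flip local search with a simplified cost update can be sketched on a small synchronous toy case (not the paper's multipath DS/CDMA setup; the correlation matrix and bits below are assumed). The key point is that flipping one bit changes the maximum-likelihood cost by an amount computable in O(K), so the full cost never needs re-evaluation.

```python
# Bit-flip local search for the ML multiuser detection cost
# Omega(b) = 2*b.y - b.R.b (maximized), with an O(K) incremental update.

def cost(b, R, y):
    K = len(b)
    return (2.0 * sum(b[i] * y[i] for i in range(K))
            - sum(b[i] * R[i][j] * b[j]
                  for i in range(K) for j in range(K)))

def ls_mud(R, y, b_init):
    b = list(b_init)
    improved = True
    while improved:
        improved = False
        for k in range(len(b)):
            # Cost change for flipping bit k, computed in O(K): the kind
            # of simplification exploited by the s-LS variant.
            delta = -4.0 * b[k] * (y[k] - sum(R[k][j] * b[j]
                                              for j in range(len(b))
                                              if j != k))
            if delta > 1e-12:
                b[k] = -b[k]
                improved = True
    return b

R = [[1.0, 0.2, 0.1, 0.0],
     [0.2, 1.0, 0.2, 0.1],
     [0.1, 0.2, 1.0, 0.2],
     [0.0, 0.1, 0.2, 1.0]]
b_true = [1, -1, 1, 1]     # transmitted bits (assumed)
# Noiseless matched-filter output y = R * b_true:
y = [sum(R[i][j] * b_true[j] for j in range(4)) for i in range(4)]
detected = ls_mud(R, y, [1, 1, 1, 1])
```

The search is fully deterministic and needs no tuning parameters, which mirrors the properties the abstract credits to s-LS.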

Relevance: 10.00%

Abstract:

An important topic in genomic sequence analysis is the identification of protein coding regions. In this context, several coding DNA model-independent methods based on the occurrence of specific patterns of nucleotides in coding regions have been proposed. Nonetheless, these methods have not been completely suitable because of their dependence on an empirically predefined window length, required for the local analysis of a DNA region. We introduce a method based on a modified Gabor-wavelet transform (MGWT) for the identification of protein coding regions. This novel transform is tuned to analyze periodic signal components and has the advantage of being independent of the window length. We compared the performance of the MGWT with that of other methods using eukaryotic data sets. The results show that MGWT outperforms all assessed model-independent methods with respect to identification accuracy. These results indicate that the source of at least part of the identification errors produced by the previous methods is the fixed working scale. The new method not only avoids this source of errors but also provides a tool for detailed exploration of the nucleotide occurrence.
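The period-3 signal that all of these methods target can be illustrated with a plain fixed-window DFT at frequency 1/3 over the four binary indicator sequences (the classical approach whose window dependence the MGWT removes; the sequences below are artificial).

```python
import cmath

def period3_score(seq):
    # Sum of squared DFT magnitudes at frequency 1/3 across the four
    # nucleotide indicator sequences, normalized by sequence length.
    score = 0.0
    for base in "ACGT":
        u = [1.0 if ch == base else 0.0 for ch in seq]
        coeff = sum(u[n] * cmath.exp(-2j * cmath.pi * n / 3)
                    for n in range(len(u)))
        score += abs(coeff) ** 2
    return score / len(seq)

coding_like = "ATG" * 30          # artificial, strongly 3-periodic
random_like = "ATGCGTACCGTTAGCAATCGGATCCGATAA" * 3   # no strong period 3
s_coding, s_random = period3_score(coding_like), period3_score(random_like)
```

A coding-like region scores far higher than a non-periodic one; the MGWT replaces the single fixed analysis window implicit in this computation with a wavelet tuned to the period-3 component, so no window length has to be chosen in advance.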