Abstract:
The objective of the present study was to verify whether active recovery (AR) applied after a judo match resulted in better performance than passive recovery (PR) in three tasks varying in their specificity to judo and in the measurement of work performed: four upper-body Wingate tests (WT), the Special Judo Fitness Test (SJFT), and another match. For this purpose, three studies were conducted. Sixteen highly trained judo athletes took part in study 1, 9 in study 2, and 12 in study 3. During AR the judokas ran for 15 min at the velocity corresponding to 70% of the 4 mmol·l⁻¹ blood lactate intensity (approximately 50% of VO2 peak), while during PR they remained seated at the competition area. The results indicated that the minimal recovery time reported in judo competitions (15 min) is long enough for sufficient recovery of performance in the WT and in a specific high-intensity test (SJFT). However, the odds of winning a match increased ten times when a judoka performed AR and his opponent performed PR; this phenomenon cannot be explained by changes in the number of actions performed or in the match's time structure.
Abstract:
This paper proposes a three-stage offline approach to detect, identify, and correct series and shunt branch parameter errors. In Stage 1, the branches suspected of having parameter errors are identified through an Identification Index (II). The II of a branch is the ratio between the number of measurements adjacent to that branch whose normalized residuals are higher than a specified threshold value and the total number of measurements adjacent to that branch. In Stage 2, using several measurement snapshots, the suspicious parameters are estimated in a simultaneous multiple-state-and-parameter estimation, via an augmented state and parameter estimator that extends the V–θ state vector to include the suspicious parameters. Stage 3 validates the estimates obtained in Stage 2 and is performed via a conventional weighted least squares estimator. Several simulation results (with IEEE bus systems) have demonstrated the reliability of the proposed approach in dealing with single and multiple parameter errors in adjacent and non-adjacent branches, as well as in parallel transmission lines with series compensation. Finally, the proposed approach is confirmed by tests performed on the Hydro-Quebec TransEnergie network.
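The Identification Index defined in Stage 1 amounts to a simple ratio; a minimal sketch follows, where the threshold value of 3.0 is an illustrative choice and not one taken from the paper:

```python
def identification_index(adjacent_residuals, threshold=3.0):
    """Identification Index (II) of a branch: the fraction of measurements
    adjacent to the branch whose normalized residuals exceed the threshold."""
    if not adjacent_residuals:
        return 0.0
    flagged = sum(1 for r in adjacent_residuals if abs(r) > threshold)
    return flagged / len(adjacent_residuals)
```

A branch whose adjacent normalized residuals are, say, [4.1, 0.5, 3.2, 1.0] would get II = 0.5 and be flagged as suspicious if 0.5 exceeds the screening level.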
Abstract:
Recent advances in energy generation technology and new directions in electricity regulation have made distributed generation (DG) more widespread, with significant consequent impacts on the operational characteristics of distribution networks. For this reason, new methods for identifying such impacts are needed, together with research and development of new tools and resources to maintain and facilitate continued expansion towards DG. This paper presents a study aimed at determining appropriate DG sites for distribution systems. The main considerations that determine DG sites are also presented, together with an account of the advantages gained from correct DG placement. The paper defines some quantitative and qualitative parameters evaluated with the DIgSILENT(R), GARP3(R), and DSA-GD software packages. A multi-objective approach based on the Bellman-Zadeh algorithm and fuzzy logic is used to determine appropriate DG sites. The study also aims to find acceptable DG locations both for distribution system feeders and for nodes inside a given feeder. (C) 2010 Elsevier Ltd. All rights reserved.
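The Bellman-Zadeh approach mentioned above selects the alternative maximizing the minimum membership across all fuzzified objectives. A minimal sketch, assuming linear membership functions and purely illustrative objective values (the actual parameters evaluated by the cited software are not reproduced here):

```python
def membership(values, minimize=True):
    """Linear membership: 1 at the best value, 0 at the worst."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    if minimize:
        return [(hi - v) / (hi - lo) for v in values]
    return [(v - lo) / (hi - lo) for v in values]

def bellman_zadeh_choice(objective_columns, minimize_flags):
    """Index of the candidate site maximizing the minimum membership
    over all objectives (max-min fuzzy decision)."""
    memberships = [membership(col, m)
                   for col, m in zip(objective_columns, minimize_flags)]
    n = len(objective_columns[0])
    scores = [min(m[i] for m in memberships) for i in range(n)]
    return max(range(n), key=scores.__getitem__)
```

For example, with three candidate sites scored on losses [10, 4, 6] and voltage deviation [0.02, 0.05, 0.01] (both minimized), the max-min rule picks the third site, which is near-best on both objectives rather than best on one and worst on the other.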
Abstract:
In this study, the innovation approach is used to estimate the total measurement error associated with power system state estimation. This is required because the power system equations are strongly correlated with each other, and as a consequence part of the measurement errors is masked. For that purpose, an innovation index (II), which quantifies the amount of new information a measurement contains, is proposed. A critical measurement is the limiting case of a measurement with low II: it has a zero II, and its error is totally masked. In other words, such a measurement brings no innovation to the gross error test. Using the II of a measurement, the gross error masked by the state estimation is recovered, and the total gross error of that measurement is then composed. Instead of the classical normalized measurement residual amplitude, the corresponding normalized composed measurement residual amplitude is used in the gross error detection and identification test, but with m degrees of freedom. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to existing state estimation software. The IEEE 14-bus system is used to validate the proposed gross error detection and identification test.
Abstract:
In this paper, a novel wire-mesh sensor based on permittivity (capacitance) measurements is applied to generate images of the phase fraction distribution and to investigate the flow of viscous oil and water in a horizontal pipe. Phase fraction values were calculated from the raw data delivered by the wire-mesh sensor using different mixture permittivity models, and these data were validated against quick-closing valve measurements. The investigated flow patterns were dispersion of oil in water (Do/w) and dispersion of oil in water together with water in oil (Do/w&w/o). The Maxwell-Garnett mixing model is better suited for the Do/w flow pattern and the logarithmic model for the Do/w&w/o flow pattern. Images of the time-averaged cross-sectional oil fraction distribution, along with axial slice images, were used to visualize and disclose some details of the flow.
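The two mixing rules named above have standard closed forms; the sketch below assumes spherical inclusions for the Maxwell-Garnett rule and uses illustrative relative permittivities (about 80 for water, about 2.2 for oil), not values from the paper:

```python
import math

def maxwell_garnett(eps_host, eps_incl, f):
    """Effective permittivity of spherical inclusions (volume fraction f)
    dispersed in a host medium (Maxwell-Garnett mixing rule)."""
    num = eps_incl + 2 * eps_host + 2 * f * (eps_incl - eps_host)
    den = eps_incl + 2 * eps_host - f * (eps_incl - eps_host)
    return eps_host * num / den

def lichtenecker(eps_host, eps_incl, f):
    """Logarithmic (Lichtenecker) mixing rule."""
    return math.exp(f * math.log(eps_incl) + (1 - f) * math.log(eps_host))

def fraction_from_lichtenecker(eps_eff, eps_host, eps_incl):
    """Invert the logarithmic rule to recover the inclusion phase fraction
    from a measured effective permittivity."""
    return math.log(eps_eff / eps_host) / math.log(eps_incl / eps_host)
```

Inverting a mixing model in this way is what turns the sensor's raw permittivity readings into the phase fraction images described in the abstract.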
Abstract:
Recently, semi-empirical models to estimate the flow boiling heat transfer coefficient, saturated CHF, and pressure drop in micro-scale channels have been proposed. Most of the models were developed based on elongated bubble and annular flows, in view of the fact that these flow patterns are predominant in smaller channels. In these models the liquid film thickness plays an important role, a fact which emphasizes that accurate measurement of the liquid film thickness is a key point in validating them. Several techniques have been successfully applied to measure liquid film thickness during condensation and evaporation under macro-scale conditions. However, although this subject has been targeted by several leading laboratories around the world, there seems to be no conclusive result describing a successful technique capable of measuring the dynamic liquid film thickness during evaporation inside micro-scale round channels. This work presents a comprehensive literature review of the methods used to measure liquid film thickness in macro- and micro-scale systems. The methods are described, and the main difficulties related to their use in micro-scale systems are identified. Based on this discussion, the most promising methods for measuring dynamic liquid film thickness in micro-scale channels are identified. (C) 2009 Elsevier Inc. All rights reserved.
Abstract:
A way of coupling digital image correlation (to measure displacement fields) and the boundary element method (to compute displacements and tractions along a crack surface) is presented herein. It allows for the identification of Young's modulus and the fracture parameters associated with a cohesive model. The procedure is illustrated by analyzing the cohesive model of an ordinary concrete in a three-point bend test on a notched beam. In view of measurement uncertainties, the results are deemed trustworthy because numerous measurement points are accessible and used as inputs to the identification procedure. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
The objective of this research is to identify the benefits of ergonomic improvements in workstations and in planned parts supply in an automotive assembly line. Another aim is to verify to what extent it is possible to create competitive advantages in the manufacturing area, with reduction in vehicle assembly time, by using technological investments in ergonomics that benefit both the worker and the company. The Methods-Time Measurement (MTM) methodology is chosen to measure the process time differences. To ensure a reliable comparison, a company in Brazil that has two different types of assembly line installations in the same plant was observed, and both assembly lines were under the same influences in terms of human resources, wages, food, and educational level of the staff. In this article, the first assembly line is called "new" and was built 6 years ago, with high investments in ergonomic solutions, in the supply system, and in the process. The other is called "traditional" and was built 23 years ago with few investments in these areas. (C) 2010 Wiley Periodicals, Inc.
Abstract:
Void fraction sensors are important instruments not only for monitoring two-phase flow, but also for furnishing an important parameter for obtaining the flow pattern map and the two-phase flow heat transfer coefficient. This work presents the experimental results obtained from the analysis of two axially spaced multiple-electrode impedance sensors tested in an upward air-water two-phase flow in a vertical tube for void fraction measurement. An electronic circuit was developed for signal generation and post-treatment of each sensor signal. By phase shifting the electrode supply signals, it was possible to establish a rotating electric field sweeping across the test section. The fundamental principle of using a multiple-electrode configuration is to reduce the signal's sensitivity to non-uniform cross-sectional void fraction distributions. Static calibration curves were obtained for both sensors, and dynamic signal analyses for bubbly, slug, and turbulent churn flows were carried out. Flow parameters such as Taylor bubble velocity and length were obtained by using cross-correlation techniques. As an application of the tested void fraction sensors, vertical flow pattern identification could be established by using the probability density function technique for void fractions ranging from 0% to nearly 70%.
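The cross-correlation estimate of the Taylor bubble velocity from two axially spaced sensors can be sketched as follows; the sensor spacing and sampling rate below are illustrative values, not the rig's:

```python
def cross_correlation_lag(upstream, downstream):
    """Sample lag maximizing the cross-correlation between two
    axially spaced void-fraction signals (brute force, zero-mean)."""
    n = len(upstream)
    mu_u = sum(upstream) / n
    mu_d = sum(downstream) / n
    u = [x - mu_u for x in upstream]
    d = [x - mu_d for x in downstream]
    best_lag, best_val = 0, float("-inf")
    for lag in range(1, n):
        val = sum(u[i] * d[i + lag] for i in range(n - lag))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

def bubble_velocity(upstream, downstream, spacing_m, fs_hz):
    """Bubble translational velocity = sensor spacing / transit time,
    with the transit time taken from the cross-correlation peak."""
    lag = cross_correlation_lag(upstream, downstream)
    return spacing_m * fs_hz / lag
```

With the downstream signal a copy of the upstream one delayed by 3 samples, a spacing of 0.1 m, and 100 Hz sampling, the estimate is 0.1 × 100 / 3 ≈ 3.33 m/s.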
Abstract:
The elastic mechanical behavior of isotropic materials is modeled by a pair of independent constants (Young's modulus and Poisson's ratio). A precise measurement of both constants is necessary in some applications, such as the quality control of mechanical elements and of standard materials used for the calibration of equipment. Ultrasonic techniques have been used because the wave velocity depends on the elastic properties of the propagation medium, and the ultrasonic test shows better repeatability and accuracy than tensile and indentation tests. In this work, the theoretical and experimental aspects related to the ultrasonic through-transmission technique for the characterization of elastic solids are presented. Furthermore, an amorphous material and some polycrystalline materials were tested. The results show excellent repeatability, with numerical errors of less than 3% in high-purity samples.
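For the through-transmission technique above, the standard isotropic relations recover Young's modulus and Poisson's ratio from the measured longitudinal and shear wave velocities and the density; the sketch below uses textbook values for aluminum as an illustration, not data from the paper:

```python
def elastic_constants(rho, v_long, v_shear):
    """Young's modulus E and Poisson's ratio nu of an isotropic solid
    from its density and ultrasonic longitudinal/shear wave velocities:
      nu = (vl^2 - 2 vt^2) / (2 (vl^2 - vt^2))
      E  = rho vt^2 (3 vl^2 - 4 vt^2) / (vl^2 - vt^2)
    """
    vl2, vt2 = v_long ** 2, v_shear ** 2
    nu = (vl2 - 2 * vt2) / (2 * (vl2 - vt2))
    E = rho * vt2 * (3 * vl2 - 4 * vt2) / (vl2 - vt2)
    return E, nu
```

For aluminum (density about 2700 kg/m³, vl about 6320 m/s, vt about 3130 m/s) this gives roughly E ≈ 71 GPa and nu ≈ 0.34, consistent with handbook values.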
Abstract:
Three-dimensional modeling of piezoelectric devices requires a precise knowledge of piezoelectric material parameters. The commonly used piezoelectric materials belong to the 6mm symmetry class, which have ten independent constants. In this work, a methodology to obtain precise material constants over a wide frequency band through finite element analysis of a piezoceramic disk is presented. Given an experimental electrical impedance curve and a first estimate for the piezoelectric material properties, the objective is to find the material properties that minimize the difference between the electrical impedance calculated by the finite element method and that obtained experimentally by an electrical impedance analyzer. The methodology consists of four basic steps: experimental measurement, identification of vibration modes and their sensitivity to material constants, a preliminary identification algorithm, and final refinement of the material constants using an optimization algorithm. The application of the methodology is exemplified using a hard lead zirconate titanate piezoceramic. The same methodology is applied to a soft piezoceramic. The errors in the identification of each parameter are statistically estimated in both cases, and are less than 0.6% for elastic constants, and less than 6.3% for dielectric and piezoelectric constants.
Abstract:
Real-time viscosity measurement remains a necessity for highly automated industry. To address this problem, many studies have been carried out using an ultrasonic shear wave reflectance method. This method is based on the determination of the magnitude and phase of the complex reflection coefficient at the solid-liquid interface. Although the magnitude is a stable quantity whose measurement is relatively simple and precise, phase measurement is a difficult task because of its strong temperature dependence. A simplified method that uses only the magnitude of the reflection coefficient, valid in the Newtonian regime, has been proposed by some authors, but the viscosity values obtained do not match conventional viscometry measurements. In this work, a mode conversion measurement cell was used to measure the viscosity of glycerin as a function of temperature (15 to 25 degrees C) and of corn syrup-water mixtures as a function of concentration (70 to 100 wt% corn syrup). Tests were carried out at 1 MHz. A novel signal processing technique that calculates the reflection coefficient magnitude over a frequency band, instead of at a single frequency, was studied. The effects of the bandwidth on magnitude and viscosity were analyzed, and the results were compared with the values predicted by the Newtonian liquid model. The frequency band technique improved the magnitude results: the viscosity values obtained came close to those measured by a rotational viscometer, with percentage errors of up to 14%, whereas errors of up to 96% were found for the single frequency method.
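The magnitude-only route from reflection coefficient to Newtonian viscosity can be sketched as follows. This is a sketch under the stated Newtonian assumption (liquid shear impedance sqrt(w*rho*eta/2)*(1+j)); the solid impedance, frequency, and liquid properties in the usage note are illustrative, not the paper's:

```python
import math

def viscosity_from_reflectance(r_mag, z_solid, freq_hz, rho_liquid):
    """Newtonian viscosity from the magnitude of the shear-wave reflection
    coefficient at a solid-liquid interface. With x = sqrt(w*rho*eta/2),
    |R|^2 = ((Zs - x)^2 + x^2) / ((Zs + x)^2 + x^2); solve for x, then
    eta = 2 x^2 / (w * rho)."""
    w = 2 * math.pi * freq_hz
    r2 = r_mag ** 2
    # Quadratic in x: 2(r2-1) x^2 + 2 Zs (r2+1) x + Zs^2 (r2-1) = 0;
    # the '+' root is the smaller (physical) one, since Z_liquid << Zs.
    a = 2 * (r2 - 1)
    b = 2 * z_solid * (r2 + 1)
    c = z_solid ** 2 * (r2 - 1)
    x = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
    return 2 * x * x / (w * rho_liquid)
```

For a glycerin-like liquid (density 1260 kg/m³, viscosity 0.1 Pa·s) against a buffer with shear impedance 1.6 MRayl at 1 MHz, the model gives |R| ≈ 0.975, and inverting that magnitude recovers the assumed viscosity.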
Abstract:
This work presents the implementation of the ultrasonic shear reflectance method for viscosity measurement of Newtonian liquids using wave mode conversion from longitudinal to shear waves and vice versa. The method is based on the measurement of the complex reflection coefficient (magnitude and phase) at a solid-liquid interface. The implemented measurement cell is composed of an ultrasonic transducer, a water buffer, an aluminum prism, a PMMA buffer rod, and a sample chamber. Viscosity measurements were made in the range from 1 to 3.5 MHz for olive oil and for automotive oils (SAE 40, 90, and 250) at 15 and 22.5 degrees C, respectively. Moreover, olive oil and corn oil measurements were conducted in the range from 15 to 30 degrees C at 3.5 and 2.25 MHz, respectively. The ultrasonic measurements, in the case of the less viscous liquids, agree with the results provided by a rotational viscometer, showing Newtonian behavior. In the case of the more viscous liquids, a significant difference was obtained, showing a clear non-Newtonian behavior that cannot be described by the Kelvin-Voigt model.
Abstract:
Nanomaterials have triggered excitement in both fundamental science and technological applications in several fields. However, the same high interface area that is responsible for their unique properties causes unconventional instability, often leading to local collapse during application. Thermodynamically, this can be attributed to an increased contribution of the interface to the free energy, activating phenomena such as sintering and grain growth. The lack of reliable interface energy data has restricted the development of conceptual models that would allow the control of nanoparticle stability on a thermodynamic basis. Here we introduce a novel and accessible methodology to measure the interface energy of nanoparticles, exploiting the heat released during sintering to establish a quantitative relation between the solid-solid and solid-vapor interface energies. We applied this method to MgO and ZnO nanoparticles and determined that the ratio between the solid-solid and solid-vapor interface energies is 1.1 for MgO and 0.7 for ZnO. We then discuss how this ratio is responsible for a thermodynamically metastable state that may prevent the collapse of nanoparticles and, therefore, may be used as a tool to design long-term stable nanoparticles.
Abstract:
The cost of a new ship design depends heavily on the principal dimensions of the ship; however, minimizing these dimensions often conflicts with minimizing oil outflow (in the event of an accidental spill). This study demonstrates a rational methodology for selecting the optimal dimensions and form coefficients of tankers via a genetic algorithm. A multi-objective optimization problem was formulated using two objective attributes in the evaluation of each design: total cost and mean oil outflow. In addition, a procedure that can be used to balance the designs in terms of weight and useful space is proposed. A genetic algorithm was implemented to search for optimal design parameters and to identify the nondominated Pareto frontier. At the end of this study, three real ships are used as case studies. [DOI:10.1115/1.4002740]
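The nondominated Pareto frontier mentioned above can be extracted from a population of candidate designs with a straightforward domination check; the (cost, outflow) pairs in the usage note are illustrative values, not designs from the study:

```python
def pareto_front(designs):
    """Nondominated designs when both objectives are minimized.
    Each design is a (total_cost, mean_oil_outflow) pair; a design is
    dominated if another is no worse in both objectives and strictly
    better in at least one."""
    front = []
    for i, (c1, o1) in enumerate(designs):
        dominated = any(
            (c2 <= c1 and o2 <= o1) and (c2 < c1 or o2 < o1)
            for j, (c2, o2) in enumerate(designs) if j != i
        )
        if not dominated:
            front.append((c1, o1))
    return front
```

For example, among the candidates [(100, 5), (90, 7), (110, 4), (120, 6), (95, 7)], the designs (120, 6) and (95, 7) are dominated and drop out, leaving a three-point frontier that trades cost against outflow.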