893 results for power system measurement
Abstract:
This study presents a decision-making method for selecting maintenance policies for power plant equipment. The method is based on risk analysis concepts. Its first step consists in identifying equipment that is critical to power plant operational performance and availability, based on risk concepts. The second step involves proposing potential maintenance policies that could be applied to the critical equipment in order to increase its availability. The costs associated with each potential maintenance policy must be estimated, including the maintenance costs and the cost of failure, which measures the consequences of critical equipment failure for power plant operation. Once the failure probabilities and the costs of failure are estimated, a decision-making procedure is applied to select the best maintenance policy. The decision criterion is to minimize the equipment's cost of failure, considering the costs and likelihood of occurrence of failure scenarios. The method is applied to the analysis of a lubrication oil system used in gas turbine journal bearings. The turbine, with a nominal output of more than 150 MW, is installed in an open-cycle thermoelectric power plant. A design modification, the installation of a redundant oil pump, is proposed to improve the availability of the lubricating oil system. (C) 2009 Elsevier Ltd. All rights reserved.
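The decision criterion above (pick the policy minimizing maintenance cost plus probability-weighted failure cost) can be sketched as follows; the policy names, probabilities, and costs are hypothetical illustrations, not values from the study:

```python
def expected_cost(policy):
    """Maintenance cost plus probability-weighted cost of each failure scenario."""
    return policy["maintenance_cost"] + sum(
        p * cost for p, cost in policy["failure_scenarios"]
    )

# Hypothetical policies for a lube oil system (all numbers illustrative).
policies = {
    "corrective": {"maintenance_cost": 10_000,
                   "failure_scenarios": [(0.20, 500_000)]},
    "preventive": {"maintenance_cost": 40_000,
                   "failure_scenarios": [(0.05, 500_000)]},
    "redundant_pump": {"maintenance_cost": 55_000,
                       "failure_scenarios": [(0.01, 500_000)]},
}

best = min(policies, key=lambda name: expected_cost(policies[name]))
```

With these illustrative numbers the redundant-pump option wins despite its higher maintenance cost, because it cuts the expected cost of failure the most.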
Abstract:
In this paper, a supervisory system able to diagnose different types of faults during the operation of a proton exchange membrane fuel cell is introduced. The diagnosis is performed by applying Bayesian networks, which qualify and quantify the cause-effect relationships among the process variables. The fault diagnosis is based on on-line monitoring of variables that are easy to measure on the machine, such as voltage, electric current, and temperature. The equipment is a fuel cell system that can continue to operate even when a fault occurs. The fault effects are based on experiments on the fault-tolerant fuel cell, which are reproduced in a fuel cell model. A database of fault records is constructed from the fuel cell model, shortening the generation time and avoiding permanent damage to the equipment. (C) 2007 Elsevier B.V. All rights reserved.
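As a minimal illustration of the cause-effect reasoning a Bayesian network encodes, the posterior probability of a fault given one observed symptom follows from Bayes' rule; the prior and likelihoods below are hypothetical, not taken from the paper:

```python
def fault_posterior(prior, p_symptom_given_fault, p_symptom_given_ok):
    """P(fault | symptom observed) by Bayes' rule for a binary fault node."""
    evidence = prior * p_symptom_given_fault + (1 - prior) * p_symptom_given_ok
    return prior * p_symptom_given_fault / evidence

# Hypothetical numbers: 10% prior fault rate; low stack voltage appears in
# 90% of faulty runs but only 5% of healthy ones.
posterior = fault_posterior(prior=0.1,
                            p_symptom_given_fault=0.9,
                            p_symptom_given_ok=0.05)
```

A full network chains many such conditional tables over voltage, current, and temperature nodes; inference then combines all observed symptoms at once.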
Abstract:
Electromagnetic suspension systems are inherently nonlinear and often face hardware limitations when digitally controlled. The main contributions of this paper are the design of a nonlinear H∞ controller, including dynamic weighting functions, applied to a large-gap electromagnetic suspension system, and the presentation of a procedure to implement this controller on a fixed-point DSP, through a methodology able to translate a floating-point algorithm into a fixed-point algorithm by minimizing the l∞ norm of the conversion error. Experimental results are also presented, in which the performance of the nonlinear controller is evaluated specifically in the initial suspension phase. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
Leading organizations in several different sectors characteristically measure their own performance in a systematic way. However, this practice is still unusual in agricultural enterprises, including the mechanization sector. Mechanization plays an important role in production costs, and knowing its performance is a key factor for the success of an agricultural enterprise. This work was motivated by the importance of performance measurement for management and by the impact of mechanization on production costs. Its aim is to propose an integrated performance measurement system to support agricultural management. The methodology was divided into two steps: adjustment of a conceptual model based on the Balanced Scorecard (BSC), and application of the model in a case study at a sugar cane mill. The adjustment and application of the conceptual model allowed performance indices to be obtained in a systematic way, associated with: costs and deadlines (traditionally used); control and improvement of the quality of operations and support processes; environmental preservation; safety; health; employee satisfaction; and development of information systems. The adjusted model supported the development of the performance measurement system for mechanized management systems, and the indices provide an integrated view of the enterprise in relation to its strategic objectives.
Abstract:
Time-domain reflectometry (TDR) is an important technique for obtaining series of soil water content measurements in the field. Diode-segmented probes represent an improvement in TDR applicability, allowing measurement of the soil water content profile with a single probe. In this paper we explore an extensive soil water content dataset obtained by tensiometry and TDR from internal drainage experiments in two consecutive years in a tropical soil in Brazil. Comparison of the variation patterns of the water content estimated by both methods showed evidence of deterioration of the TDR system over this two-year period under field conditions. The results showed consistency in the variation pattern of the tensiometry data, whereas the TDR estimates were inconsistent, with sensitivity decreasing over time. This suggests that difficulties may arise in the long-term use of this TDR system under tropical field conditions. (c) 2008 Elsevier B.V. All rights reserved.
Abstract:
This study determined the inter-tester and intra-tester reliability of physiotherapists measuring the functional motor ability of traumatic brain injury clients using the Clinical Outcomes Variable Scale (COVS). To test inter-tester reliability, 14 physiotherapists scored the ability of 16 videotaped patients to execute the items that comprise the COVS. Intra-tester reliability was determined by four physiotherapists repeating their assessments after one week and again three months later. The intra-class correlation coefficients (ICC) were very high for both inter-tester reliability (ICC > 0.97 for total COVS scores, ICC > 0.93 for individual COVS items) and intra-tester reliability (ICC > 0.97). This study demonstrates that physiotherapists are reliable in their administration of the COVS.
Abstract:
The linear relationship between work accomplished (Wlim) and time to exhaustion (tlim) can be described by the equation Wlim = a + CP·tlim. Critical power (CP) is the slope of this line and is thought to represent a maximum rate of ATP synthesis without exhaustion, presumably an inherent characteristic of the aerobic energy system. The present investigation determined whether the choice of predictive tests would elicit significant differences in the estimated CP. Ten female physical education students completed, in random order and on consecutive days, five all-out predictive tests at preselected constant power outputs. Predictive tests were performed on an electrically braked cycle ergometer, and power loadings were individually chosen so as to induce fatigue within approximately 1-10 min. CP was derived by fitting the linear Wlim-tlim regression, calculated three ways: 1) using the first, third, and fifth Wlim-tlim coordinates (I-135); 2) using coordinates from the three highest power outputs (I-123; mean tlim = 68-193 s); and 3) using coordinates from the lowest power outputs (I-345; mean tlim = 193-485 s). Repeated measures ANOVA revealed that CP(I-123) (201.0 +/- 37.9 W) > CP(I-135) (176.1 +/- 27.6 W) > CP(I-345) (164.0 +/- 22.8 W) (P < 0.05). When the three sets of data were used to fit the hyperbolic power-tlim regression, statistically significant differences between each CP were also found (P < 0.05). The shorter the predictive trials, the greater the slope of the Wlim-tlim regression, possibly because of the greater influence of 'aerobic inertia' on these trials. This may explain why CP has failed to represent a maximal, sustainable work rate. The present findings suggest that if CP is to represent the highest power output that an individual can maintain for a very long time without fatigue, then CP should be calculated over a range of predictive tests in which the influence of aerobic inertia is minimised.
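The CP estimate described above is the slope of an ordinary least-squares fit of Wlim against tlim; a minimal sketch with illustrative (not experimental) data:

```python
def critical_power(tlims, wlims):
    """Fit Wlim = a + CP * tlim by ordinary least squares.

    Returns (CP, a): CP is the slope (critical power, W) and
    a is the intercept (finite work capacity above CP, J).
    """
    n = len(tlims)
    mt = sum(tlims) / n
    mw = sum(wlims) / n
    cp = sum((t - mt) * (w - mw) for t, w in zip(tlims, wlims)) / sum(
        (t - mt) ** 2 for t in tlims
    )
    a = mw - cp * mt
    return cp, a

# Illustrative trials: exhaustion times (s) and total work (J) lying
# exactly on a line with slope 180 W and intercept 15 kJ.
tlims = [68, 120, 193, 300, 485]
wlims = [15_000 + 180 * t for t in tlims]
cp, a = critical_power(tlims, wlims)
```

Fitting different subsets of trials (as in the I-135, I-123, and I-345 comparisons) simply means passing different slices of the coordinate lists to the same function.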
Abstract:
The use of computational fluid dynamics simulations for calibrating a flush air data system is described. In particular, the flush air data system of the HYFLEX hypersonic vehicle is used as a case study. The HYFLEX air data system consists of nine pressure ports located flush with the vehicle nose surface, connected to onboard pressure transducers. After appropriate processing, surface pressure measurements can be converted into useful air data parameters. The processing algorithm requires an accurate pressure model, which relates air data parameters to the measured pressures. In the past, such pressure models have been calibrated using combinations of flight data, ground-based experimental results, and numerical simulation. We perform a calibration of the HYFLEX flush air data system using computational fluid dynamics simulations exclusively. The simulations are used to build an empirical pressure model that accurately describes the HYFLEX nose pressure distribution over a range of flight conditions. We believe that computational fluid dynamics provides a quick and inexpensive way to calibrate the air data system and is applicable to a broad range of flight conditions. When tested with HYFLEX flight data, the calibrated system is found to work well. It predicts vehicle angle of attack and angle of sideslip to accuracy levels that generally satisfy flight control requirements. Dynamic pressure is predicted to within the resolution of the onboard inertial measurement unit. We find that wind-tunnel experiments and flight data are not necessary to accurately calibrate the HYFLEX flush air data system for hypersonic flight.
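The calibration idea (fit an empirical pressure model to simulated data, then invert it in flight to recover air data) can be sketched in simplified form; the single-port linear model and all numbers below are hypothetical, not the paper's actual nine-port model:

```python
def fit_pressure_model(alphas, pressures):
    """Least-squares fit of pressure = c0 + c1 * alpha from CFD samples."""
    n = len(alphas)
    ma = sum(alphas) / n
    mp = sum(pressures) / n
    c1 = sum((a - ma) * (p - mp) for a, p in zip(alphas, pressures)) / sum(
        (a - ma) ** 2 for a in alphas
    )
    c0 = mp - c1 * ma
    return c0, c1

def invert(c0, c1, p_measured):
    """Recover angle of attack from a measured port pressure."""
    return (p_measured - c0) / c1

# Hypothetical CFD samples: port pressure (kPa) vs. angle of attack (deg),
# generated from a synthetic linear response.
alphas = [0.0, 5.0, 10.0, 15.0]
pressures = [2.0 + 0.12 * a for a in alphas]
c0, c1 = fit_pressure_model(alphas, pressures)
alpha_est = invert(c0, c1, p_measured=2.6)  # -> 5.0 deg
```

A real flush air data system solves a coupled nonlinear fit over all ports for angle of attack, sideslip, and dynamic pressure simultaneously, but the calibrate-then-invert structure is the same.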
Abstract:
This is the first paper in a study on the influence of the environment on the crack tip strain field for AISI 4340. A stressing stage for the environmental scanning electron microscope (ESEM) was constructed, capable of applying loads up to 60 kN to fracture-mechanics samples. Measurement of the crack tip strain field required preparation (by electron lithography or chemical etching) of a system of reference points spaced at intervals of ~5 μm on the sample surface, loading the sample inside an electron microscope, image processing procedures to measure the displacement at each reference point, and calculation of the strain field. Two algorithms to calculate strain were evaluated. Possible sources of error were calculation errors due to the algorithm, errors inherent in the image processing procedure, and errors due to the limited precision of the displacement measurements. The contribution of each source of error was estimated. The technique allows measurement of the crack tip strain field over an area of 50 × 40 μm with a strain precision better than ±0.02 at distances larger than 5 μm from the crack tip. (C) 1999 Kluwer Academic Publishers.
Abstract:
The purpose of the present investigation was to gain an understanding of the nature of the carbon contamination on the surface of standard steel transmission electron microscopy (TEM) specimens, the effect of exposure of a clean specimen to normal laboratory air, and the efficacy of plasma-cleaning treatments. This knowledge is a necessary prerequisite to the development of appropriate specimen preparation and/or specimen cleaning methods. X-ray photoelectron spectroscopy in combination with argon ion beam profiling was used to characterize the specimen surfaces of X65 steel and 316 stainless steel. The only clean, carbon-free surface obtained was that produced during argon etching of the sample in the surface analysis chamber. Any exposure of a previously cleaned sample to laboratory air resulted in rapid carbon (hydrocarbon) contamination of the sample surface and the development of surface oxidation. Plasma cleaning with subsequent exposure of the specimen to laboratory air also resulted in a carbon-contaminated surface. This suggests that procedures for preparing TEM specimens of steels outside an ultrahigh vacuum chamber are unlikely to lower contamination rates on specimens to levels where measurements of carbon in the grain boundaries are possible. What is needed is a cleaning system as an integral part of the specimen insertion system of the field-emission scanning transmission electron microscope. This cleaning could be carried out by argon ion etching. Copyright (C) 2000 John Wiley & Sons, Ltd.
Abstract:
We present a method for measuring single spins embedded in a solid by probing two-electron systems with a single-electron transistor (SET). Restrictions imposed by the Pauli principle on allowed two-electron states mean that the spin state of such a system has a profound impact on the orbital states (positions) of the electrons, a parameter which SETs are extremely well suited to measure. We focus on a particular system capable of being fabricated with current technology: a Te double donor in Si adjacent to a Si/SiO2 interface and lying directly beneath the SET island electrode, and we outline a measurement strategy capable of resolving single electron and nuclear spins in this system. We discuss the limitations of the measurement imposed by spin scattering arising from fluctuations emanating from the SET and from lattice phonons. We conclude that measurement of single spins, a necessary requirement for several proposed quantum computer architectures, is feasible in Si using this strategy.
Abstract:
We consider continuous observation of the nonlinear dynamics of a single atom trapped in an optical cavity by a standing wave with intensity modulation. The motion of the atom changes the phase of the field, which is then monitored by homodyne detection of the output field. We show that the conditional Hilbert space dynamics of this system, subject to measurement-induced perturbations, depends strongly on whether the corresponding classical dynamics is regular or chaotic. If the classical dynamics is chaotic, the conditional Hilbert space vectors corresponding to different observation records tend to become orthogonal. This is a characteristic feature of hypersensitivity to perturbation for quantum chaotic systems.
Abstract:
Current theoretical thinking about dual processes in recognition relies heavily on the measurement operations embodied within the process dissociation procedure. We critically evaluate the ability of this procedure to support this theoretical enterprise. We show that there are alternative processes that would produce a rough invariance in familiarity (a key prediction of the dual-processing approach) and that the process dissociation procedure does not have the power to differentiate between these alternative possibilities. We also show that attempts to relate parameters estimated by the process dissociation procedure to subjective reports (remember-know judgments) cannot differentiate between alternative dual-processing models and that there are problems with some of the historical evidence and with obtaining converging evidence. Our conclusion is that more specific theories incorporating ideas about representation and process are required.
Abstract:
The object of this article is to estimate demand elasticities for a basket of staple foods important for meeting the caloric needs of Brazilian households. These elasticities are useful in measuring the impact of structural reforms on poverty. A two-stage demand system was constructed, based on data from the Household Expenditure Surveys (POF) produced by IBGE (the Brazilian Bureau of Statistics) in 1987/88 and 1995/96. We used panel data to estimate the model and calculated income, own-price, and cross-price elasticities for eight groups of goods and services and, in the second stage, for 11 subgroups of staple food products. We estimated these elasticities for the whole sample of consumers and for two income groups.
Abstract:
Background: A major goal in the post-genomic era is to identify and characterise disease susceptibility genes and to apply this knowledge to disease prevention and treatment. Rodents and humans have remarkably similar genomes and share closely related biochemical, physiological and pathological pathways. In this work we utilised the latest information on the mouse transcriptome as revealed by the RIKEN FANTOM2 project to identify novel human disease-related candidate genes. We define a new term, patholog, to mean a homolog of a human disease-related gene encoding a product (transcript, anti-sense or protein) potentially relevant to disease. Rather than focus only on Mendelian inheritance, we applied the analysis to all potential pathologs regardless of their inheritance pattern. Results: Bioinformatic analysis and human curation of 60,770 RIKEN full-length mouse cDNA clones produced 2,578 sequences that showed similarity (70-85% identity) to known human disease genes. Using a newly developed biological information extraction and annotation tool (FACTS) in parallel with human expert analysis of 17,051 MEDLINE scientific abstracts, we identified 182 novel potential pathologs. Of these, 36 were identified by computational tools only, 49 by human expert analysis only, and 97 by both methods. These pathologs were related to neoplastic (53%), hereditary (24%), immunological (5%), cardiovascular (4%), or other (14%) disorders. Conclusions: Large-scale genome projects continue to produce a vast amount of data with potential application to the study of human disease. For this potential to be realised, we need intelligent strategies for data categorisation and the ability to link sequence data with relevant literature. This paper demonstrates the power of combining human expert annotation with FACTS, a newly developed bioinformatics tool, to identify novel pathologs from within large-scale mouse transcript datasets.