968 results for MEASUREMENT UNCERTAINTY
Abstract:
Distributed fibre sensors provide unique capabilities for monitoring large infrastructures with high resolution. Practically all of these sensors are based on some kind of backscattering interaction: a pulsed activating signal is launched into one end of the sensing fibre and the backscattered signal is read as a function of the time of flight of the pulse along the fibre. A key limitation in the measurement range of all these sensors is introduced by fibre attenuation. As the pulse travels along the fibre, losses cause a drop in signal contrast and consequently a growth in the measurement uncertainty. In typical single-mode fibres, attenuation imposes a range limit of less than 30 km for resolutions on the order of 1–2 m. An interesting improvement in this performance can be obtained by using distributed amplification along the fibre [1]. Distributed amplification yields a more homogeneous signal power along the sensing fibre, which also enables reducing the signal power at the input and therefore avoiding nonlinearities. However, in long structures (≥ 50 km), plain distributed amplification does not perfectly compensate the losses, and significant power variations along the fibre are to be expected, leading to inevitable limitations in the measurements. From this perspective, it is intuitively clear that the best possible solution for distributed sensors would be a virtually transparent fibre, i.e. a fibre exhibiting effectively zero attenuation in the spectral region of the pulse. In addition, it can be shown that lossless transmission is the working point that minimizes the build-up of amplified spontaneous emission (ASE) noise. © 2011 IEEE.
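The power-budget argument above can be illustrated with a toy calculation; the loss value, fibre length, and pump profile below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

alpha_db_km = 0.2                       # typical single-mode fibre loss at 1550 nm
L = 50.0                                # sensing fibre length, km
z = np.linspace(0.0, L, 501)
dz = np.r_[0.0, np.diff(z)]             # step sizes for the cumulative sum

# Passive fibre: power falls linearly with distance on a dB scale.
p_passive_db = -alpha_db_km * z

# Crude model of imperfect distributed amplification: a distributed gain that
# matches the loss on average but not locally (under-pumped first half,
# over-pumped second half), leaving a residual power excursion.
gain_db_km = np.where(z < L / 2, 0.15, 0.25)      # assumed pump profile, dB/km
p_amp_db = np.cumsum(dz * (gain_db_km - alpha_db_km))

excursion_passive = p_passive_db.max() - p_passive_db.min()   # 10 dB over 50 km
excursion_amp = p_amp_db.max() - p_amp_db.min()               # well under 2 dB here
```

Even this crude compensation shrinks the signal power excursion by an order of magnitude, which is the intuition behind aiming for an effectively transparent fibre.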
Abstract:
We have proposed a similarity matching method (SMM) to obtain the change in Brillouin frequency shift (BFS), in which the change in BFS is determined from the frequency difference between the detected spectrum and a selected reference spectrum by comparing their similarity. We have also compared three similarity measures in simulation, which showed that the correlation coefficient is the most accurate for determining the change in BFS. Compared with other methods of determining the change in BFS, the SMM is more suitable for complex Brillouin spectrum profiles. More precise results and much faster processing have been verified in our simulations and experiments. The experimental results show that the measurement uncertainty of the BFS is improved to 0.72 MHz by using the SMM, almost one-third of that obtained with the curve fitting method, and that deriving the BFS change with the SMM is 120 times faster than with the curve fitting method.
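The core idea of similarity matching can be sketched as follows: estimate the BFS change as the frequency offset that maximizes the correlation coefficient between a shifted reference spectrum and the measured spectrum. This is an illustrative sketch, not the authors' code; the Lorentzian line shape and all parameter values are assumptions.

```python
import numpy as np

def lorentzian(f, f0, w):
    # idealized Brillouin gain spectrum with centre f0 and linewidth w
    return 1.0 / (1.0 + ((f - f0) / (w / 2)) ** 2)

f = np.arange(10700.0, 10900.0, 0.5)         # frequency grid, MHz
reference = lorentzian(f, 10800.0, 30.0)     # reference spectrum
true_shift = 12.0                            # MHz, the BFS change to recover
rng = np.random.default_rng(0)
measured = lorentzian(f, 10800.0 + true_shift, 30.0) + rng.normal(0, 0.01, f.size)

def corr_at(shift):
    # Pearson correlation between the measured spectrum and a shifted reference
    shifted_ref = lorentzian(f, 10800.0 + shift, 30.0)
    return np.corrcoef(shifted_ref, measured)[0, 1]

candidate_shifts = np.arange(-50.0, 50.0, 0.5)
scores = np.array([corr_at(s) for s in candidate_shifts])
estimated_shift = candidate_shifts[np.argmax(scores)]
```

Because no model fitting is involved, each candidate shift costs only one correlation evaluation, which is consistent with the speed advantage over curve fitting reported in the abstract.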
Abstract:
Anthropogenic carbon dioxide (CO2) emissions are reducing the pH of the world's oceans. The plankton community is a key component driving biogeochemical fluxes, and the effect of increased CO2 on plankton is critical for understanding the ramifications of ocean acidification for global carbon fluxes. We determined the plankton community composition and measured primary production, respiration rates and carbon export (defined here as carbon sinking out of a shallow, coastal area) during an ocean acidification experiment. Mesocosms (~ 55 m3) were set up in the Baltic Sea with a gradient of CO2 levels initially ranging from ambient (~ 240 µatm), used as control, to high CO2 (up to ~ 1330 µatm). The phytoplankton community was dominated by dinoflagellates, diatoms, cyanobacteria and chlorophytes, and the zooplankton community by protozoans, heterotrophic dinoflagellates and cladocerans. The plankton community composition was relatively homogeneous between treatments. Community respiration rates were lower at high CO2 levels: carbon-normalized respiration was approximately 40 % lower in the high-CO2 environment than in the controls during the latter phase of the experiment. We did not, however, detect any effect of increased CO2 on primary production. This could be due to measurement uncertainty, as the measured total particulate carbon (TPC) and the combined results presented in this special issue suggest that the reduced respiration rate translated into higher net carbon fixation. The percentage of the measured TPC derived from microscopy counts (both phyto- and zooplankton) decreased from ~ 26 % at t0 to ~ 8 % at t31, probably driven by a shift towards smaller plankton (< 4 µm) not enumerated by microscopy. Our results suggest that reduced respiration led to increased net carbon fixation at high CO2. However, the increased primary production did not translate into increased carbon export, and consequently did not act as a negative feedback mechanism for increasing atmospheric CO2 concentration.
Abstract:
This study aims to evaluate the uncertainty associated with measurements made by an aneroid sphygmomanometer, a neonatal electronic balance and an electrocautery unit. Repeatability tests were performed on all devices, followed by normality tests using Shapiro–Wilk; identification of the influencing factors that affect the result of each measurement; proposal of mathematical models to calculate the measurement uncertainty associated with each instrument and the calibration uncertainty of the neonatal electronic balance; evaluation of the measurement uncertainty; and development of a computer program in Java to systematize the calibration and measurement uncertainty estimates. A 2³ factorial design was proposed and carried out for the aneroid sphygmomanometer in order to investigate the effects of the factors temperature, patient and operator, and a 3² design for the electrocautery, investigating the effects of temperature and output electrical power. The expanded uncertainty associated with the measurement of blood pressure significantly reduced the width of the patient classification bands. In turn, the expanded uncertainty associated with mass measurement on the neonatal balance indicated a variation of about 1 % in the dosage of medication for neonates. Analysis of variance (ANOVA) and the Tukey test indicated significant, inversely proportional effects of the temperature factor on the cutting and coagulation power values delivered by the electrocautery, and no significant effect of the investigated factors for the aneroid sphygmomanometer.
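A 2³ full factorial design of the kind described above estimates each factor's main effect from only eight runs. The sketch below uses invented response data, not the study's measurements, to show the mechanics.

```python
import numpy as np
from itertools import product

# Three factors (A, B, C) at two coded levels, -1 and +1: 8 runs in total.
factors = np.array(list(product([-1, 1], repeat=3)))

# Invented responses: strong effect of factor A, weaker effects of B and C.
response = 100 + 8 * factors[:, 0] + 1 * factors[:, 1] + 0.5 * factors[:, 2]

# Main effect of a factor = mean response at its high level minus mean at its
# low level; the design's balance cancels the other factors' contributions.
main_effects = {
    name: response[factors[:, i] == 1].mean() - response[factors[:, i] == -1].mean()
    for i, name in enumerate("ABC")
}
```

With coded ±1 levels, each main effect equals twice the underlying coefficient (here 16, 2, and 1), and significance would then be judged with ANOVA as in the study.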
Abstract:
The off-cycle refrigerant mass migration has a direct influence on on-cycle performance, since compressor energy is necessary to redistribute the refrigerant mass. No studies available in the open literature to date have experimentally measured the lubricant migration within a refrigeration system during cycling or stop/start transients. Therefore, experimental procedures for measuring the refrigerant and lubricant migration through the major components of a refrigeration system during stop/start transients were developed and implemented, and results identifying the underlying physics are presented. The refrigerant and lubricant migration of an R134a automotive A/C system (utilizing a fixed orifice tube, minichannel condenser, plate-and-fin evaporator, U-tube type accumulator and fixed displacement compressor) was measured across five sections divided by ball valves. Using the Quick-Closing Valve Technique (QCVT) combined with the Remove and Weigh Technique (RWT), with liquid nitrogen as the condensing agent, resulted in a measurement uncertainty of 0.4 percent with respect to the total refrigerant mass in the system. The lubricant mass distribution was determined by employing three different techniques: Remove and Weigh, Mix and Sample, and Flushing. To employ the Mix and Sample Technique, a device (the Mix and Sample Device) was built. A method to separate the refrigerant and lubricant was developed with an accuracy, after separation, of 0.04 grams of refrigerant left in the lubricant. When applying the three techniques, the total amount of lubricant mass in the system was determined to within two percent. The combination of measurement results, infrared photography, and high-speed and real-time videography provides unprecedented insight into the mechanisms of refrigerant and lubricant migration during stop/start operation.
During the compressor stop period, the primary refrigerant mass migration is caused by, and follows, the diminishing pressure difference across the expansion device. The secondary refrigerant migration is caused by a pressure gradient resulting from thermal nonequilibrium within the system and involves only vapor-phase refrigerant. Lubricant migration is proportional to the refrigerant mass during the primary refrigerant mass migration; during the secondary refrigerant mass migration, lubricant does not migrate. The start-up refrigerant mass migration is caused by an imbalance of the refrigerant mass flow rates across the compressor and expansion device. The higher compressor refrigerant mass flow rate was a result of the entrainment of foam into the U-tube of the accumulator. The lubricant mass migration during start-up was not proportional to the refrigerant mass migration. The presence of water condensate on the evaporator affected the refrigerant mass migration during the compressor stop period: owing to an evaporative cooling effect, the evaporator held 56 percent of the total refrigerant mass in the system after three minutes of compressor stop time, compared with 25 percent when no water condensate was present on the evaporator coil. Foam entrainment led to a faster lubricant and refrigerant mass migration out of the accumulator than liquid entrainment through the hole at the bottom of the U-tube. The latter was observed when water condensate was present on the evaporator coil because, as a result of the higher amount of refrigerant mass in the evaporator before start-up, the entrainment of foam into the U-tube of the accumulator ceased before the steady-state refrigerant mass distribution was reached.
Abstract:
A decision-maker, when faced with a limited and fixed budget to collect data in support of a multiple attribute selection decision, must decide how many samples to observe from each alternative and attribute. This allocation decision is of particular importance when the information gained leads to uncertain estimates of the attribute values, as with sample data collected from observations such as measurements, experimental evaluations, or simulation runs. For example, when the U.S. Department of Homeland Security must decide upon a radiation detection system to acquire, a number of performance attributes are of interest and must be measured in order to characterize each of the considered systems. We identified and evaluated several approaches to incorporate the uncertainty in the attribute value estimates into a normative model for a multiple attribute selection decision. Assuming an additive multiple attribute value model, we demonstrated the idea of propagating the attribute value uncertainty and describing the decision values for each alternative as probability distributions, which were then used to select an alternative. With the goal of maximizing the probability of correct selection, we developed and evaluated, under several different sets of assumptions, procedures to allocate the fixed experimental budget across the multiple attributes and alternatives. Through a series of simulation studies, we compared the performance of these allocation procedures to the simple but common procedure that distributes the sample budget equally across the alternatives and attributes. We found that the allocation procedures developed to include decision-maker knowledge, such as knowledge of the decision model, outperformed those that neglected such information.
Beginning with general knowledge of the attribute values provided by Bayesian prior distributions, and updating this knowledge with each observed sample, the sequential allocation procedure performed particularly well. These observations demonstrate that managing projects focused on a selection decision so that the decision modeling and the experimental planning are done jointly, rather than in isolation, can improve the overall selection results.
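The central construct above, propagating attribute-value uncertainty through an additive value model to a probability of correct selection, can be sketched by Monte Carlo. This is an illustrative toy, not the authors' procedure; the weights, attribute values, noise level, and allocation are all invented.

```python
import numpy as np

rng = np.random.default_rng(42)
weights = np.array([0.6, 0.4])                   # attribute weights, sum to 1
true_attr = np.array([[0.70, 0.50],              # alternative A's attribute values
                      [0.60, 0.55]])             # alternative B's attribute values
noise_sd = 0.10                                  # per-observation measurement noise
n_samples = np.array([[5, 5], [5, 5]])           # equal allocation of the budget

true_value = true_attr @ weights                 # additive value model
best = int(np.argmax(true_value))                # the truly best alternative

# Monte Carlo: each replicate draws the sample means of the attribute
# observations and applies the value model to the noisy estimates.
n_mc = 5000
wins = 0
for _ in range(n_mc):
    est = true_attr + rng.normal(0.0, noise_sd / np.sqrt(n_samples))
    if np.argmax(est @ weights) == best:
        wins += 1
p_correct_selection = wins / n_mc
```

A sequential allocation procedure would instead choose, after each observation, where the next sample most increases this probability, rather than fixing the allocation in advance.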
Abstract:
The development of adequate methods for monitoring residues and contaminants in food is of utmost importance, since it is the only way to guarantee food safety and avoid harm to consumer health. For this, such methods need to be fast, simple and low-cost, and capable of detecting residues at low concentrations and in different matrices. This work consisted of the development of a method for the determination of 5 sedatives and 14 β-blockers in swine kidney samples, with subsequent analysis by liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS). The extraction procedure best suited to these compounds consisted of weighing 2 g of sample and adding 10 mL of acetonitrile, followed by homogenization with an Ultra-Turrax and a shaker table. After extraction, the samples were subjected to two clean-up techniques: low-temperature freezing of the extract and dispersive solid-phase extraction (d-SPE) using Celite® 545 as sorbent. A concentration step was carried out in a sample concentrator under N2 flow at controlled temperature. The dried samples were reconstituted in methanol and analysed on an LC-MS/MS system with electrospray ionization (ESI) operating in positive MRM mode, using a Poroshell 120 EC-C18 column (3.0 x 50 mm, 2.7 μm) for analyte separation and a mobile-phase gradient composed of (A) aqueous solution acidified with 0.1% formic acid (v/v) and (B) methanol with 0.1% formic acid (v/v). The validation parameters evaluated were linearity, selectivity, matrix effect, precision, trueness, recovery, decision limit, detection capability, measurement uncertainty, robustness, and the limits of detection and quantification. In addition, the performance criteria applicable to mass spectrometric detection and the stability of the compounds were assessed.
Recovery was evaluated at 10 μg kg-1 and trueness at 5, 10 and 15 μg kg-1, with satisfactory results between 70–85% and 90–101%, respectively. The limit of quantification was 2.5 μg kg-1, except for carazolol, for which it was 1.25 μg kg-1. The linearity study was carried out between 0 and 20 μg kg-1, with coefficients of determination above 0.98. These procedures were performed by analysis of fortified blank matrix. In addition, the method was used to analyse carazolol, azaperone and azaperol in swine kidney samples from a collaborative trial, giving results very close to the true values. It can therefore be concluded that the developed method is suitable for the analysis of sedatives and β-blockers, with efficient extraction of the compounds and clean-up of the extract using fast, simple and low-cost procedures, guaranteeing safe and reliable results.
Abstract:
Master's dissertation, Universidade de Brasília, Instituto de Química, 2016.
Abstract:
This paper compares the performance of the complex nonlinear least squares algorithm implemented in the LEVM/LEVMW software with the performance of a genetic algorithm in the characterization of an electrical impedance of known topology. The effect of the number of measured frequency points and of measurement uncertainty on the estimation of circuit parameters is presented. The analysis is performed on the equivalent circuit impedance of a humidity sensor.
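A minimal genetic algorithm for such a known-topology fit can be sketched as below. This is a toy GA, not the LEVM/LEVMW complex nonlinear least squares implementation; the circuit (a series resistance plus a parallel R-C branch), the parameter values, the bounds, and the GA settings are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(params, f):
    # Z(f) = Rs + Rp / (1 + j*2*pi*f*Rp*C): series Rs plus parallel Rp-C branch
    rs, rp, c = params
    return rs + rp / (1 + 1j * 2 * np.pi * f * rp * c)

f = np.logspace(1, 5, 30)                      # measured frequency points, Hz
true_params = np.array([100.0, 1000.0, 1e-7])  # Rs = 100 ohm, Rp = 1 kohm, C = 100 nF
z_meas = model(true_params, f)                 # noise-free synthetic "measurement"

def cost(params):
    # sum of squared complex residuals over all frequency points
    return float(np.sum(np.abs(model(params, f) - z_meas) ** 2))

# Evolve log10-scaled parameters so all three span comparable ranges.
lo = np.log10([10.0, 100.0, 1e-9])
hi = np.log10([1000.0, 10000.0, 1e-6])
pop = rng.uniform(lo, hi, size=(60, 3))

best_costs = []
for _ in range(80):
    fitness = np.array([cost(10.0 ** ind) for ind in pop])
    order = np.argsort(fitness)
    best_costs.append(fitness[order[0]])
    parents = pop[order[:20]]                       # truncation selection
    idx = rng.integers(0, 20, size=(60, 2))
    mix = rng.random((60, 3))
    children = mix * parents[idx[:, 0]] + (1 - mix) * parents[idx[:, 1]]  # blend crossover
    children += rng.normal(0.0, 0.02, children.shape)                     # Gaussian mutation
    children[0] = parents[0]                        # elitism: keep the incumbent best
    pop = children

best_params = 10.0 ** pop[0]
```

Elitism makes the best cost non-increasing across generations, which is what the assertions on `best_costs` rely on; the effect of the number of frequency points and of added measurement noise could be studied by varying `f` and perturbing `z_meas`.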
Abstract:
The effects of particulate matter on the environment and public health have been widely studied in recent years. A number of studies in the medical field have tried to identify the specific effects of particulate exposure on human health, but agreement amongst these studies on the relative importance of particle size and origin with respect to health effects is still lacking. Nevertheless, air quality standards, like epidemiological attention, are moving towards a greater focus on smaller particles. Current air quality standards only regulate the mass of particulate matter less than 10 μm in aerodynamic diameter (PM10) and less than 2.5 μm (PM2.5). The most reliable method for measuring Total Suspended Particles (TSP), PM10, PM2.5 and PM1 is the gravimetric method, since it directly measures PM concentration, guaranteeing effective traceability to international standards. This technique, however, cannot capture short-term intra-day variations of the atmospheric parameters that influence ambient particle concentration and size distribution (emission strengths of particle sources, temperature, relative humidity, wind direction and speed, and mixing height), or of human activity patterns, which may also vary over time periods considerably shorter than 24 hours. A continuous method to measure the number size distribution and total number concentration in the range 0.014–20 μm is the tandem system constituted by a Scanning Mobility Particle Sizer (SMPS) and an Aerodynamic Particle Sizer (APS). In this paper, an uncertainty budget model for the measurement of airborne particle number, surface area and mass size distributions is proposed and applied to several typical aerosol size distributions. The estimation of such an uncertainty budget presents several difficulties due to i) the complexity of the measurement chain, and ii) the fact that the SMPS and APS can properly guarantee traceability to the International System of Units only in terms of number concentration; the surface area and mass concentrations must be estimated on the basis of separately determined average density and particle morphology.
Keywords: SMPS-APS tandem system, gravimetric reference method, uncertainty budget, ultrafine particles.
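The point about traceability only in number concentration can be made concrete with a first-order propagation sketch. The formulas assume spherical particles and a single average density, and all numerical values are invented, not the paper's budget.

```python
import numpy as np

d = np.array([0.1, 0.5, 1.0, 2.5, 10.0]) * 1e-6   # bin diameters, m
n = np.array([1e10, 5e9, 1e9, 1e8, 1e6])           # number concentration per bin, 1/m^3
rho = 1600.0                                       # assumed average particle density, kg/m^3

surface = np.pi * d**2 * n                         # surface area concentration, m^2/m^3
mass = rho * np.pi / 6 * d**3 * n                  # mass concentration, kg/m^3

# Assumed relative standard uncertainties: number, diameter, density.
u_n, u_d, u_rho = 0.05, 0.03, 0.10

# First-order (GUM-style) propagation in quadrature: S ~ d^2*N triples nothing,
# but the d^2 and d^3 dependences amplify the diameter uncertainty 2x and 3x.
u_surface = np.sqrt(u_n**2 + (2 * u_d)**2)
u_mass = np.sqrt(u_n**2 + (3 * u_d)**2 + u_rho**2)
```

The derived quantities inherit the diameter uncertainty with sensitivity coefficients 2 and 3, and the mass additionally carries the density uncertainty, which is why the mass budget is the largest of the three.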
Abstract:
A growing literature considers the impact of uncertainty using SVAR models that include proxies for uncertainty shocks as endogenous variables. In this paper we consider the impact of measurement error in these proxies on the estimated impulse responses. We show via a Monte Carlo experiment that measurement error can result in attenuation bias in the impulse responses. In contrast, the proxy SVAR that uses the uncertainty shock proxy as an instrument does not suffer from this bias. Applying this latter method to the Bloom (2009) dataset results in impulse responses to uncertainty shocks that are larger in magnitude and more persistent than those obtained from a recursive SVAR.
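The attenuation mechanism can be demonstrated in a stripped-down setting: classical measurement error in a regressor biases the OLS slope toward zero by the factor var(x)/(var(x)+var(error)), while an instrumental-variable estimator remains consistent. The data-generating process below is invented for illustration, not the paper's SVAR.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
beta = 1.0

z = rng.normal(size=n)                  # instrument, correlated with the true shock
x_true = z + rng.normal(size=n)         # true (unobserved) uncertainty shock
y = beta * x_true + rng.normal(size=n)  # outcome responds to the true shock
x_obs = x_true + rng.normal(size=n)     # observed proxy with classical measurement error

# OLS of y on the noisy proxy: attenuated by var(x)/(var(x)+var(err)) = 2/3 here.
beta_ols = np.cov(x_obs, y)[0, 1] / np.var(x_obs)

# IV estimate cov(z, y)/cov(z, x_obs): consistent because the measurement
# error is unrelated to the instrument.
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x_obs)[0, 1]
```

The same logic carries over to impulse responses: scaling the shock by an attenuated coefficient shrinks the whole response path, which is the bias the proxy-SVAR instrument avoids.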
Abstract:
The uncertainty on the calorimeter energy response to jets of particles is derived for the ATLAS experiment at the Large Hadron Collider (LHC). First, the calorimeter response to single isolated charged hadrons is measured and compared to the Monte Carlo simulation using proton-proton collisions at centre-of-mass energies of √s = 900 GeV and 7 TeV collected during 2009 and 2010. Then, using the decay of K_s and Λ particles, the calorimeter response to specific types of particles (positively and negatively charged pions, protons, and anti-protons) is measured and compared to the Monte Carlo predictions. Finally, the jet energy scale uncertainty is determined by propagating the response uncertainty for single charged and neutral particles to jets. The response uncertainty is 2–5 % for central isolated hadrons and 1–3 % for the final calorimeter jet energy scale.
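The final propagation step can be caricatured as follows. This is a simplified sketch, not the ATLAS procedure: it treats each constituent's response uncertainty as uncorrelated with the others and combines them in quadrature weighted by energy fraction, and the jet composition and uncertainty values are invented.

```python
import numpy as np

# energies (GeV) and relative response uncertainties of a jet's constituents
e = np.array([20.0, 10.0, 5.0, 5.0, 2.0])          # assumed constituent energies
u_rel = np.array([0.03, 0.02, 0.04, 0.04, 0.05])   # assumed per-particle uncertainties

w = e / e.sum()                                    # energy fractions within the jet
# quadrature sum of energy-fraction-weighted contributions (uncorrelated case)
jet_uncertainty = np.sqrt(np.sum((w * u_rel) ** 2))
```

Under the uncorrelated assumption the jet-level uncertainty is smaller than the worst single-particle uncertainty; correlated response errors (e.g. a common calorimeter miscalibration) would instead add linearly and give a larger result.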
Abstract:
This paper describes a method of uncertainty evaluation for axisymmetric measurement machines which is compliant with the GUM and PUMA methodologies. Specialized measuring machines for the inspection of axisymmetric components enable the measurement of properties such as roundness (radial runout), axial runout and coning. These machines typically consist of a rotary table and a number of contact measurement probes located on slideways. Sources of uncertainty include the probe calibration process, probe repeatability, probe alignment, geometric errors in the rotary table, the dimensional stability of the structure holding the probes and form errors in the reference hemisphere used to calibrate the system. The generic method is described, and an evaluation of an industrial machine is presented as a worked example. Type A uncertainties were obtained from a repeatability study of the probe calibration process, a repeatability study of the actual measurement process, a system stability test and an elastic deformation test. Type B uncertainties were obtained from calibration certificates and estimates. Expanded uncertainties, at 95% confidence, were then calculated for the measurement of radial runout (1.2 µm with a plunger probe or 1.7 µm with a lever probe), axial runout (1.2 µm with a plunger probe or 1.5 µm with a lever probe), and coning/swash (0.44 arc seconds with a plunger probe or 0.60 arc seconds with a lever probe).
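The GUM-style combination of Type A and Type B contributions described above follows a standard recipe, sketched below with invented numbers (not the paper's budget): Type A terms are standard deviations of the mean from repeatability studies, Type B terms come from certificates assuming a rectangular distribution, and everything is combined in quadrature and expanded with k = 2.

```python
import numpy as np

# Type A: repeated observations from the probe calibration repeatability study
# and from the measurement repeatability study (values in µm, invented).
repeat_cal = np.array([0.41, 0.38, 0.44, 0.40, 0.39, 0.42])
repeat_meas = np.array([0.52, 0.49, 0.55, 0.50, 0.53, 0.51])
u_cal = repeat_cal.std(ddof=1) / np.sqrt(repeat_cal.size)    # std dev of the mean
u_meas = repeat_meas.std(ddof=1) / np.sqrt(repeat_meas.size)

# Type B: a certificate quoting a +/-0.3 µm bound, assumed rectangular,
# so the standard uncertainty is the half-width divided by sqrt(3).
u_cert = 0.3 / np.sqrt(3)

# Combine in quadrature and expand with coverage factor k = 2 (~95 % confidence).
u_combined = np.sqrt(u_cal**2 + u_meas**2 + u_cert**2)
U_expanded = 2 * u_combined
```

In a full budget each contribution would carry a sensitivity coefficient from the measurement model; here they are taken as 1 for simplicity.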
Abstract:
This paper details a method of estimating the uncertainty of dimensional measurement for a three-dimensional coordinate measurement machine. An experimental procedure was developed to compare three-dimensional coordinate measurements with calibrated reference points. The reference standard used to calibrate these reference points was a fringe-counting interferometer, with a multilateration-like technique employed to establish three-dimensional coordinates. This is an extension of the established technique of comparing measured lengths with calibrated lengths. Specifically, a distributed coordinate measurement device was tested consisting of a network of Rotary-Laser Automatic Theodolites (R-LATs); this system is known commercially as indoor GPS (iGPS). The method was found to be practical and was used to estimate that the uncertainty of measurement for the basic iGPS system is approximately 1 mm at a 95% confidence level throughout a measurement volume of approximately 10 m × 10 m × 1.5 m. © 2010 IOP Publishing Ltd.
Abstract:
This paper shows how the angular uncertainties can be determined for a rotary-laser automatic theodolite of the type used in indoor GPS (iGPS) networks. Initially, the fundamental physics of the rotating-head device is used to propagate uncertainties using Monte Carlo simulation. This theoretical element of the study shows how the angular uncertainty is affected by internal parameters, the actual values of which are estimated. Experiments are then carried out to determine the actual uncertainty in the azimuth angle. Results are presented showing that uncertainty decreases with sampling duration. Other significant findings are that uncertainty is relatively constant throughout the working volume and that the uncertainty value does not depend on the size of the reference angle. © 2009 IMechE.
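The finding that uncertainty decreases with sampling duration is what a Monte Carlo propagation with independent per-rotation noise would predict: averaging N readings shrinks the uncertainty roughly as 1/sqrt(N). The noise model and values below are assumptions for illustration, not the device physics used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
true_azimuth = 30.0            # degrees, the angle being measured
per_sample_sd = 0.01           # degrees of noise per single-rotation reading (assumed)

def azimuth_uncertainty(n_rotations, n_trials=2000):
    # Each Monte Carlo trial averages n_rotations noisy angle readings;
    # the spread of those averages estimates the uncertainty of the result.
    samples = true_azimuth + rng.normal(0.0, per_sample_sd, (n_trials, n_rotations))
    return samples.mean(axis=1).std(ddof=1)

u_short = azimuth_uncertainty(5)     # short sampling duration
u_long = azimuth_uncertainty(100)    # long sampling duration
```

The ratio u_short/u_long should sit near sqrt(100/5) ≈ 4.5, consistent with the reported decrease of uncertainty with sampling duration; correlated noise (e.g. spindle wobble) would flatten this improvement.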