987 results for measurement errors
Abstract:
This is the first paper in a study on the influence of the environment on the crack tip strain field for AISI 4340. A stressing stage for the environmental scanning electron microscope (ESEM) was constructed which was capable of applying loads up to 60 kN to fracture-mechanics samples. The measurement of the crack tip strain field required preparation (by electron lithography or chemical etching) of a system of reference points spaced at ~5 μm intervals on the sample surface, loading the sample inside an electron microscope, image processing procedures to measure the displacement at each reference point, and calculation of the strain field. Two algorithms to calculate strain were evaluated. Possible sources of error were calculation errors due to the algorithm, errors inherent in the image processing procedure, and errors due to the limited precision of the displacement measurements. The contribution of each source of error was estimated. The technique allows measurement of the crack tip strain field over an area of 50 × 40 μm with a strain precision better than ±0.02 at distances larger than 5 μm from the crack tip. © 1999 Kluwer Academic Publishers.
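The final step, computing strain from the measured displacements, can be illustrated with a short sketch. The Python code below is a generic finite-difference formulation on a regular grid of reference points; the array names and values are hypothetical placeholders, and it is not either of the two algorithms evaluated in the paper.

```python
import numpy as np

# Hypothetical displacements (in micrometres) measured at reference points
# laid out on a regular grid with ~5 um pitch; values are placeholders.
spacing_um = 5.0
rng = np.random.default_rng(0)
ux = rng.normal(0.0, 0.05, size=(8, 10))   # measured x-displacements
uy = rng.normal(0.0, 0.05, size=(8, 10))   # measured y-displacements

# Displacement gradients by central finite differences
# (axis 0 = rows = y, axis 1 = columns = x).
dux_dy, dux_dx = np.gradient(ux, spacing_um)
duy_dy, duy_dx = np.gradient(uy, spacing_um)

# Components of the small-strain (infinitesimal) tensor at every grid point.
eps_xx = dux_dx
eps_yy = duy_dy
eps_xy = 0.5 * (dux_dy + duy_dx)

print(eps_xx.shape, float(eps_xy.mean()))
```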
Abstract:
We show that quantum feedback control can be used as a quantum-error-correction process for errors induced by a weak continuous measurement. In particular, when the error model is restricted to one, perfectly measured, error channel per physical qubit, quantum feedback can act to perfectly protect a stabilizer codespace. Using the stabilizer formalism we derive an explicit scheme, involving feedback and an additional constant Hamiltonian, to protect an (n-1)-qubit logical state encoded in n physical qubits. This works for both Poisson (jump) and white-noise (diffusion) measurement processes. Universal quantum computation is also possible in this scheme. As an example, we show that detected-spontaneous emission error correction with a driving Hamiltonian can greatly reduce the amount of redundancy required to protect a state from that which has been previously postulated [e.g., Alber et al., Phys. Rev. Lett. 86, 4402 (2001)].
Abstract:
We describe in detail the theory underpinning the measurement of density matrices of a pair of quantum two-level systems (qubits). Our particular emphasis is on qubits realized by the two polarization degrees of freedom of a pair of entangled photons generated in a down-conversion experiment; however, the discussion applies in general, regardless of the actual physical realization. Two techniques are discussed, namely, a tomographic reconstruction (in which the density matrix is linearly related to a set of measured quantities) and a maximum likelihood technique which requires numerical optimization (but has the advantage of producing density matrices that are always non-negative definite). In addition, a detailed error analysis is presented, allowing errors in quantities derived from the density matrix, such as the entropy or entanglement of formation, to be estimated. Examples based on down-conversion experiments are used to illustrate our results.
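The linear (tomographic) reconstruction mentioned above can be sketched for the two-qubit case: the density matrix is a linear combination of Pauli products weighted by measured expectation values. The Python sketch below is a generic illustration under that assumption (it uses ideal, noiseless expectation values and is not the authors' measurement protocol); note that it does not enforce positivity, which is the motivation for the maximum likelihood technique.

```python
import numpy as np

# Pauli basis (I, X, Y, Z)
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
paulis = [I, X, Y, Z]

def reconstruct_two_qubit(expectations):
    """Linear tomographic reconstruction of a two-qubit density matrix.

    expectations[(i, j)] is the measured expectation value of sigma_i (x) sigma_j,
    with indices 0..3 running over I, X, Y, Z.  Illustrates the linear relation
    between rho and measured quantities; positivity is not enforced.
    """
    rho = np.zeros((4, 4), dtype=complex)
    for i in range(4):
        for j in range(4):
            rho += expectations[(i, j)] * np.kron(paulis[i], paulis[j])
    return rho / 4.0

# Example: noiseless expectation values for the Bell state (|HH> + |VV>)/sqrt(2).
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho_true = np.outer(bell, bell.conj())
exps = {(i, j): np.trace(rho_true @ np.kron(paulis[i], paulis[j])).real
        for i in range(4) for j in range(4)}
print(np.allclose(reconstruct_two_qubit(exps), rho_true))  # True
```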
Abstract:
We propose a new method, based on inertial sensors, to automatically measure at high frequency the durations of the main phases of ski jumping (i.e. take-off release, take-off, and early flight). The kinematics of the ski jumping movement were recorded by four inertial sensors, attached to the thigh and shank of junior athletes, for 40 jumps performed in indoor conditions and 36 jumps in field conditions. An algorithm was designed to detect temporal events from the recorded signals and to estimate the duration of each phase. These durations were evaluated against a reference camera-based motion capture system and by trainers conducting video observations. The precision for the take-off release and take-off durations (indoor < 39 ms, outdoor = 27 ms) can be considered technically valid for performance assessment. The errors for early flight duration (indoor = 22 ms, outdoor = 119 ms) were comparable to the trainers' variability and should be interpreted with caution. No significant changes in the error were noted between indoor and outdoor conditions, and individual jumping technique did not influence the error of take-off release and take-off. Therefore, the proposed system can provide valuable information for performance evaluation of ski jumpers during training sessions.
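The event-detection step can be illustrated with a minimal sketch. The code below uses a single threshold crossing on an angular-velocity trace as a simplified stand-in; the paper's algorithm defines specific kinematic events (take-off release, take-off, early flight), and the signal, sampling rate, and threshold here are hypothetical.

```python
import numpy as np

def phase_duration(signal, fs_hz, threshold):
    """Duration (in ms) of the interval during which |signal| exceeds a threshold.

    Simplified stand-in for inertial-sensor event detection: the first and last
    samples above the threshold bracket the phase of interest.
    """
    above = np.flatnonzero(np.abs(signal) > threshold)
    if above.size == 0:
        return 0.0
    return (above[-1] - above[0]) / fs_hz * 1000.0

# Hypothetical 500 Hz shank angular-velocity trace with a burst around take-off.
fs = 500.0
t = np.arange(0, 1.0, 1 / fs)
omega = 400.0 * np.exp(-((t - 0.5) / 0.05) ** 2)  # deg/s, synthetic
print(phase_duration(omega, fs, threshold=100.0), "ms")
```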
Abstract:
BACKGROUND: Measurement of plasma renin is important for the clinical assessment of hypertensive patients. The most common methods for measuring plasma renin are the plasma renin activity (PRA) assay and the renin immunoassay. The clinical application of renin inhibitor therapy has thrown into focus the differences in information provided by activity assays and immunoassays for renin and prorenin measurement and has drawn attention to the need for precautions to ensure their accurate measurement. CONTENT: Renin activity assays and immunoassays provide related but different information. Whereas activity assays measure only active renin, immunoassays measure both active and inhibited renin. Particular care must be taken in the collection and processing of blood samples and in the performance of these assays to avoid errors in renin measurement. Both activity assays and immunoassays are susceptible to renin overestimation due to prorenin activation. In addition, activity assays performed with peptidase inhibitors may overestimate the degree of inhibition of PRA by renin inhibitor therapy. Moreover, immunoassays may overestimate the reactive increase in plasma renin concentration in response to renin inhibitor therapy, owing to the inhibitor promoting conversion of prorenin to an open conformation that is recognized by renin immunoassays. CONCLUSIONS: The successful application of renin assays to patient care requires that the clinician and the clinical chemist understand the information provided by these assays and the precautions necessary to ensure their accuracy.
Abstract:
A new method of measuring joint angle using a combination of accelerometers and gyroscopes is presented. The method proposes a minimal sensor configuration with one sensor module mounted on each segment. The model is based on estimating the acceleration of the joint center of rotation by placing a pair of virtual sensors on the adjacent segments at the center of rotation. In the proposed technique, joint angles are found without the need for integration, so absolute angles can be obtained which are free from any source of drift. The model considers anatomical aspects and is personalized for each subject prior to each measurement. The method was validated by measuring knee flexion-extension angles of eight subjects, walking at three different speeds, and comparing the results with a reference motion measurement system. The results are very close to those of the reference system, presenting very small errors (rms = 1.3, mean = 0.2, SD = 1.1 deg) and an excellent correlation coefficient (0.997). The algorithm is able to provide joint angles in real time and is ready for use in gait analysis. Technically, the system is portable, easily mountable, and can be used for long term monitoring without hindrance to natural activities.
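The key idea that angles obtained without integration are free of drift can be illustrated with a deliberately simplified sketch: under quasi-static conditions, each segment's inclination can be read directly from the gravity component of its accelerometer, and the joint angle taken as the difference. This is only an illustration of integration-free angle estimation with hypothetical sensor readings, not the paper's virtual-sensor model of the joint center.

```python
import numpy as np

def segment_inclination(acc_xyz):
    """Inclination (rad) of a body segment from one accelerometer sample.

    Assumes quasi-static motion so the reading is dominated by gravity;
    the angle is between the sensor's longitudinal axis (x) and the vertical.
    No integration of angular velocity is involved, hence no drift.
    """
    ax, ay, az = acc_xyz
    return np.arctan2(np.sqrt(ay**2 + az**2), ax)

# Hypothetical thigh and shank accelerometer samples (in g).
thigh_acc = np.array([0.97, 0.20, 0.10])
shank_acc = np.array([0.80, 0.55, 0.23])
knee_flexion = segment_inclination(shank_acc) - segment_inclination(thigh_acc)
print(np.degrees(knee_flexion), "deg")
```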
Abstract:
It is essential for organizations to compress detailed sets of information into more comprehensive sets, thereby establishing sharp data compression and good decision-making. In chapter 1, I review and structure the literature on information aggregation in management accounting research. I outline the cost-benefit trade-off that management accountants need to consider when they decide on the optimal levels of information aggregation. Beyond the fundamental information content perspective, organizations also have to account for cognitive and behavioral perspectives. I elaborate on these aspects differentiating between research in cost accounting, budgeting and planning, and performance measurement. In chapter 2, I focus on a specific bias that arises when probabilistic information is aggregated. In budgeting and planning, for example, organizations need to estimate mean costs and durations of projects, as the mean is the only measure of central tendency that is linear. Different from the mean, measures such as the mode or median cannot simply be added up. Given the specific shape of cost and duration distributions, estimating mode or median values will result in underestimations of total project costs and durations. In two experiments, I find that participants tend to estimate mode values rather than mean values, resulting in large distortions of estimates for total project costs and durations.
I also provide a strategy that partly mitigates this bias. In the third chapter, I conduct an experimental study to compare two approaches to time estimation for cost accounting, i.e., traditional activity-based costing (ABC) and time-driven ABC (TD-ABC). Contrary to claims made by proponents of TD-ABC, I find that TD-ABC is not necessarily suitable for capacity computations. However, I also provide evidence that TD-ABC seems better suited for cost allocations than traditional ABC.
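The aggregation bias discussed in chapter 2 is easy to demonstrate numerically: for right-skewed distributions the mode lies below the mean, and because only means add linearly, summing task modes understates the expected total. The sketch below assumes lognormal task costs as an illustrative distribution (not the distributions used in the experiments).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical project with 10 tasks whose costs are right-skewed (lognormal).
# For a lognormal(mu, sigma): mean = exp(mu + sigma^2/2), mode = exp(mu - sigma^2).
mu, sigma, n_tasks = 10.0, 0.8, 10
task_mean = np.exp(mu + sigma**2 / 2)
task_mode = np.exp(mu - sigma**2)

total_from_means = n_tasks * task_mean   # correct: expectations are additive
total_from_modes = n_tasks * task_mode   # biased: modes are not additive

# Monte Carlo check of the true expected total cost.
sim_total = rng.lognormal(mu, sigma, size=(100_000, n_tasks)).sum(axis=1).mean()

print(f"sum of means: {total_from_means:,.0f}")
print(f"sum of modes: {total_from_modes:,.0f}  (underestimates the total)")
print(f"simulated expected total: {sim_total:,.0f}")
```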
Abstract:
OBJECTIVE: The estimation of blood pressure is dependent on the accuracy of the measurement devices. We compared blood pressure readings obtained with an automated oscillometric arm-cuff device and with an automated oscillometric wrist-cuff device and then assessed the prevalence of defined blood pressure categories. METHODS: Within a population-based survey in Dar es Salaam (Tanzania), we selected all participants with a blood pressure ≥160/95 mmHg (n=653) and a random sample of participants with blood pressure <160/95 mmHg (n=662), based on the first blood pressure reading. Blood pressure was reassessed 2 years later for 464 and 410 of the participants, respectively. In these 874 subjects, we compared the prevalence of blood pressure categories as estimated with each device. RESULTS: Overall, the wrist device gave higher blood pressure readings than the arm device (difference in systolic/diastolic blood pressure: 6.3 ± 17.3 / 3.7 ± 11.8 mmHg, P<0.001). However, the arm device tended to give lower readings than the wrist device for high blood pressure values. The prevalence of blood pressure categories differed substantially depending on which device was used: 29% and 14% for blood pressure <120/80 mmHg (arm device versus wrist device, respectively), 30% and 33% for blood pressure 120-139/80-89 mmHg, 17% and 26% for blood pressure 140-159/90-99 mmHg, 12% and 13% for blood pressure 160-179/100-109 mmHg, and 13% and 14% for blood pressure ≥180/110 mmHg. CONCLUSIONS: A large discrepancy in the estimated prevalence of blood pressure categories was observed using two different automatic measurement devices. This emphasizes that prevalence estimates based on automatic devices should be considered with caution.
Abstract:
We have designed and built an experimental device, which we called a "thermoelectric bridge." Its primary purpose is the simultaneous measurement of the relative Peltier and Seebeck coefficients. The systematic errors for both coefficients are equal with this device, and no manipulation is necessary between the measurement of one coefficient and the other. Thus, this device is especially suitable for verifying their linear relation postulated by Lord Kelvin. Simultaneous measurement of thermal conductivity is also described in the text. A sample made up of a nickel-platinum couple was measured in the range of -20 to 60°C, establishing the dependence of each coefficient on temperature, with nearly equal random errors of ±0.2% and systematic errors estimated at ±0.5%. The aforementioned Kelvin relation is verified in this range from these results, showing that the behavioral deviations of about 0.3% are contained within the ±0.5% uncertainty caused by the propagation of errors.
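The linear relation being verified is the first Kelvin (Thomson) relation, Π = S·T, linking the relative Peltier and Seebeck coefficients at the same absolute temperature. The short sketch below checks the relation for one pair of simultaneous readings; the numerical values are hypothetical placeholders, not the nickel-platinum data of the paper.

```python
# Check of the Kelvin relation Pi = S * T for one simultaneous measurement.
T = 300.0      # absolute temperature, K (placeholder)
S = 15.0e-6    # relative Seebeck coefficient, V/K (placeholder)
Pi = 4.52e-3   # relative Peltier coefficient, V, i.e. W/A (placeholder)

deviation = Pi / (S * T) - 1.0
print(f"relative deviation from Pi = S*T: {deviation:+.2%}")
```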
Abstract:
When researchers introduce a new test they have to demonstrate that it is valid, using unbiased designs and suitable statistical procedures. In this article we use Monte Carlo analyses to highlight how incorrect statistical procedures (i.e., stepwise regression, extreme scores analyses) or ignoring regression assumptions (e.g., heteroscedasticity) contribute to wrong validity estimates. Beyond these demonstrations, and as an example, we re-examined the results reported by Warwick, Nettelbeck, and Ward (2010) concerning the validity of the Ability Emotional Intelligence Measure (AEIM). Warwick et al. used the wrong statistical procedures to conclude that the AEIM was incrementally valid beyond intelligence and personality traits in predicting various outcomes. In our re-analysis, we found that the reliability-corrected multiple correlation of their measures with personality and intelligence was up to .69. Using robust statistical procedures and appropriate controls, we also found that the AEIM did not predict incremental variance in GPA, stress, loneliness, or well-being, demonstrating the importance of testing validity rather than simply looking for it.
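One of the problems behind data-driven procedures such as stepwise regression is capitalization on chance: selecting the "best" of many candidate predictors inflates the apparent validity coefficient even when no true relation exists. The Monte Carlo sketch below illustrates this point only; it is not a re-implementation of the analyses in the article, and the sample size and number of predictors are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, n_sims = 100, 20, 2_000   # sample size, candidate predictors, simulations

max_abs_r = []
for _ in range(n_sims):
    y = rng.standard_normal(n)
    X = rng.standard_normal((n, k))      # predictors unrelated to y by construction
    r = (X - X.mean(0)).T @ (y - y.mean()) / (n * X.std(0) * y.std())
    max_abs_r.append(np.abs(r).max())    # keep the "best" predictor, as selection would

print(f"mean |r| of the selected predictor under the null: {np.mean(max_abs_r):.2f}")
# Typically in the .2-.3 range with these settings, despite zero true validity.
```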
Abstract:
Sap flow could be used as a physiological parameter to assist irrigation of screen house citrus nursery trees through continuous estimation of water consumption. Herein we report a first set of results indicating the potential use of the heat dissipation method for sap flow measurement in containerized citrus nursery trees. 'Valencia' sweet orange [Citrus sinensis (L.) Osbeck] budded on 'Rangpur' lime (Citrus limonia Osbeck) was evaluated for 30 days during summer. Heat dissipation probes and thermocouple sensors were constructed with low-cost and easily available materials in order to improve accessibility of the method. Sap flow showed a high correlation with air temperature inside the screen house. However, errors due to the natural thermal gradient and to plant tissue injuries affected measurement precision. Transpiration estimated by sap flow measurement was four times higher than that obtained by gravimetric measurement. Improved micro-probes, adequate method calibration, and non-toxic insulating materials should be further investigated.
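For context, the heat dissipation method is commonly reduced to the Granier calibration, which converts the temperature difference between a heated and an unheated probe into sap flux density. The sketch below uses that commonly cited calibration with hypothetical readings; it is not the calibration fitted (or questioned) in the paper, and the abstract notes that natural thermal gradients bias exactly this kind of computation.

```python
def sap_flux_density(dT, dT_max):
    """Sap flux density (m3 m-2 s-1) from heat-dissipation probe readings.

    Widely cited Granier form: u = 118.99e-6 * K**1.231, with
    K = (dT_max - dT) / dT, where dT is the probe temperature difference
    and dT_max its value at zero flow (e.g., pre-dawn).
    """
    K = (dT_max - dT) / dT
    return 118.99e-6 * K ** 1.231

# Hypothetical readings: 10 C at zero flow (pre-dawn), 7.5 C at midday.
print(sap_flux_density(dT=7.5, dT_max=10.0))
```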
Abstract:
Electrical impedance tomography (EIT) allows the measurement of intra-thoracic impedance changes related to cardiovascular activity. As a safe and low-cost imaging modality, EIT is an appealing candidate for non-invasive and continuous haemodynamic monitoring. EIT has recently been shown to allow the assessment of aortic blood pressure via the estimation of the aortic pulse arrival time (PAT). However, finding the aortic signal within EIT image sequences is a challenging task: the signal has a small amplitude and is difficult to locate due to the small size of the aorta and the inherent low spatial resolution of EIT. In order to most reliably detect the aortic signal, our objective was to understand the effect of EIT measurement settings (electrode belt placement, reconstruction algorithm). This paper investigates the influence of three transversal belt placements and two commonly-used difference reconstruction algorithms (Gauss-Newton and GREIT) on the measurement of aortic signals in view of aortic blood pressure estimation via EIT. A magnetic resonance imaging based three-dimensional finite element model of the haemodynamic bio-impedance properties of the human thorax was created. Two simulation experiments were performed with the aim to (1) evaluate the timing error in aortic PAT estimation and (2) quantify the strength of the aortic signal in each pixel of the EIT image sequences. Both experiments reveal better performance for images reconstructed with Gauss-Newton (with a noise figure of 0.5 or above) and a belt placement at the height of the heart or higher. According to the noise-free scenarios simulated, the uncertainty in the analysis of the aortic EIT signal is expected to induce blood pressure errors of at least ± 1.4 mmHg.
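Pulse arrival time is, in essence, the delay between a cardiac reference event and the arrival of the pressure pulse in the aortic signal. The sketch below illustrates that timing computation with synthetic waveforms and a naive foot detector; it uses a hypothetical ECG reference and is not the detection scheme, frame rate, or signal model used in the paper.

```python
import numpy as np

def pulse_arrival_time(ecg, aortic_eit, fs_hz):
    """Pulse arrival time (s) from an ECG reference and an aortic EIT waveform.

    Simplified illustration: the R-peak is taken as the maximum of the ECG in
    one beat, and pulse arrival as the minimum (foot) of the aortic impedance
    waveform after that peak. Real PAT estimation uses more robust detectors.
    """
    r_peak = int(np.argmax(ecg))
    foot = r_peak + int(np.argmin(aortic_eit[r_peak:]))
    return (foot - r_peak) / fs_hz

# Synthetic one-beat example sampled at 100 frames/s.
fs = 100.0
t = np.arange(0, 1.0, 1 / fs)
ecg = np.exp(-((t - 0.2) / 0.01) ** 2)        # R-peak at 0.2 s
aortic = -np.exp(-((t - 0.35) / 0.05) ** 2)   # pulse foot near 0.35 s
print(pulse_arrival_time(ecg, aortic, fs), "s")
```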
Abstract:
Thermal and air conditions inside animal facilities change during the day due to the influence of the external environment. For statistical and geostatistical analyses to be representative, a large number of points spatially distributed in the facility area must be monitored. This work suggests that the time variation of environmental variables of interest for animal production, monitored within an animal facility, can be modeled accurately from discrete-time records. The aim of this study was to develop a numerical method to correct the temporal variations of these environmental variables, transforming the data so that the observations are independent of the time spent during the measurement. The proposed method adjusts values recorded with time delays toward those expected at the exact moment of interest, as if the data had been measured simultaneously at all spatially distributed points. The correction model for numerical environmental variables was validated for the air temperature parameter, and the values corrected by the method did not differ (Tukey's test at 5% significance) from the actual values recorded by data loggers.
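A simple way to picture this kind of correction is linear interpolation in time between two consecutive measurement rounds at each monitoring point, evaluated at a common reference time. The sketch below is only a stand-in for the correction model described in the abstract, with hypothetical reading times and temperatures.

```python
import numpy as np

def correct_to_reference_time(t_readings, values, t_ref):
    """Estimate the value each monitoring point would have shown at t_ref.

    t_readings, values: per-point measurement times and readings from two
    consecutive walks through the facility (shape: 2 rounds x n points).
    Each point is corrected by linear interpolation in time.
    """
    t0, t1 = t_readings
    v0, v1 = values
    w = (t_ref - t0) / (t1 - t0)
    return v0 + w * (v1 - v0)

# Hypothetical air temperatures (deg C) at 3 points, read sequentially per round.
t_round = np.array([[0.0, 3.0, 6.0],       # minutes into round 1
                    [30.0, 33.0, 36.0]])   # minutes into round 2
temp = np.array([[24.1, 24.6, 25.0],
                 [26.0, 26.3, 26.9]])
print(correct_to_reference_time(t_round, temp, t_ref=15.0))
```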
Abstract:
The problem of using information available from one variable X to make inference about another Y is classical in many physical and social sciences. In statistics this is often done via regression analysis, where the mean response is used to model the data. One stipulates the model Y = µ(X) + ɛ. Here µ(x) is the mean response at the predictor variable value X = x, and ɛ = Y - µ(X) is the error. In classical regression analysis both (X, Y) are observable, and one then proceeds to make inference about the mean response function µ(X). In practice there are numerous examples where X is not available, but a variable Z is observed which provides an estimate of X. As an example, consider the herbicide study of Rudemo et al. [3], in which a nominal measured amount Z of herbicide was applied to a plant but the actual amount absorbed by the plant, X, is unobservable. As another example, from Wang [5], an epidemiologist studies the severity of a lung disease, Y, among the residents in a city in relation to the amount of certain air pollutants. The amount of the air pollutants Z can be measured at certain observation stations in the city, but the actual exposure of the residents to the pollutants, X, is unobservable and may vary randomly from the Z-values. In both cases X = Z + error. This is the so-called Berkson measurement error model. In the more classical measurement error model one observes an unbiased estimator W of X and stipulates the relation W = X + error. An example of this model occurs when assessing the effect of nutrition X on a disease: measuring nutrition intake precisely within 24 hours is almost impossible. There are many similar examples in agricultural or medical studies; see, e.g., Carroll, Ruppert and Stefanski [1] and Fuller [2], among others. In this talk we shall address the question of fitting a parametric model to the regression function µ(X) in the Berkson measurement error model: Y = µ(X) + ɛ, X = Z + η, where η and ɛ are random errors with E(ɛ) = 0, X and η are d-dimensional, and Z is the observable d-dimensional r.v.
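The practical difference between the two error models is easy to see in a linear special case: with Berkson error, ordinary least squares of Y on the observed Z still recovers the slope of µ, whereas with classical error the slope estimated from the observed W is attenuated toward zero. The simulation below is a minimal illustration of that contrast (linear µ, normal errors, arbitrary parameter values), not the parametric fitting problem addressed in the talk.

```python
import numpy as np

rng = np.random.default_rng(42)
n, beta = 200_000, 2.0          # sample size and true slope of mu(x) = beta * x

def ols_slope(x, y):
    x = x - x.mean()
    y = y - y.mean()
    return (x @ y) / (x @ x)

Z = rng.normal(0.0, 1.0, n)

# Berkson model: X = Z + eta, with Z observed and X unobserved.
X_berkson = Z + rng.normal(0.0, 0.5, n)
Y_berkson = beta * X_berkson + rng.normal(0.0, 1.0, n)
print("Berkson, slope of Y on Z:", ols_slope(Z, Y_berkson))   # ~ beta (unbiased)

# Classical model: W = X + error, with W observed and X unobserved.
X = rng.normal(0.0, 1.0, n)
W = X + rng.normal(0.0, 0.5, n)
Y = beta * X + rng.normal(0.0, 1.0, n)
print("Classical, slope of Y on W:", ols_slope(W, Y))         # attenuated toward 0
```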
Abstract:
Airborne laser altimetry has the potential to make frequent detailed observations that are important for many aspects of studying land surface processes. However, the uncertainties inherent in airborne laser altimetry data have rarely been well measured. Uncertainty is often specified only in general terms, such as 20 cm in elevation and 40 cm planimetric. To better constrain these uncertainties, we present an analysis of several datasets acquired specifically to study the temporal consistency of laser altimetry data, and thus assess its operational value. The error budget has three main components, each with its own time regime. For measurements acquired less than 50 ms apart, elevations have a local standard deviation in height of 3.5 cm, enabling the local measurement of surface roughness of the order of 5 cm. Points acquired seconds apart incur an additional random error due to Differential Global Positioning System (DGPS) fluctuation. Measurements made up to an hour apart show an elevation drift of 7 cm over a half hour. Over months, this drift gives rise to a random elevation offset between swathes, with an average of 6.4 cm. The RMS planimetric error in point location was derived as 37.4 cm. We conclude by considering the consequences of these uncertainties on the principal application of laser altimetry in the UK, intertidal zone monitoring.