962 results for measurement errors


Relevance: 30.00%

Publisher:

Abstract:

The distribution of cortical bone in the proximal femur is believed to be a critical component in determining fracture resistance. Current CT technology is limited in its ability to measure cortical thickness, especially in the sub-millimetre range which lies within the point spread function of today's clinical scanners. In this paper, we present a novel technique that is capable of producing unbiased thickness estimates down to 0.3 mm. The technique relies on a mathematical model of the anatomy and the imaging system, which is fitted to the data at a large number of sites around the proximal femur, producing around 17,000 independent thickness estimates per specimen. In a series of experiments on 16 cadaveric femurs, estimation errors were measured as -0.01 ± 0.58 mm (mean ± 1 standard deviation) for cortical thicknesses in the range 0.3-4 mm. This compares with 0.25 ± 0.69 mm for simple thresholding and 0.90 ± 0.92 mm for a variant of the 50% relative threshold method. In the clinically relevant sub-millimetre range, thresholding increasingly fails to detect the cortex at all, whereas the new technique continues to perform well. The many cortical thickness estimates can be displayed as a colour map painted onto the femoral surface. Computation of the surfaces and colour maps is largely automatic, requiring around 15 min on a modest laptop computer.
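
The core idea, fitting a blur-aware model to each CT intensity profile rather than thresholding it, can be illustrated with a minimal one-dimensional sketch. The sketch below is not the authors' implementation: the profile model, noise level and the assumed (fixed) cortical density are illustrative, and it uses a generic least-squares fit in Python.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

Y_CORT = 1500.0   # assumed (constrained) cortical density in HU -- illustrative value

def blurred_cortex(x, t, x0, y_soft, sigma):
    """Rectangle of thickness t (the cortex) on a soft-tissue background,
    convolved with a Gaussian point-spread function of width sigma."""
    rise = 0.5 * (1 + erf((x - (x0 - t / 2)) / (np.sqrt(2) * sigma)))
    fall = 0.5 * (1 + erf((x - (x0 + t / 2)) / (np.sqrt(2) * sigma)))
    return y_soft + (Y_CORT - y_soft) * (rise - fall)

# simulate one noisy CT profile across a 0.6 mm cortex, blurred well beyond its width
x = np.linspace(-5, 5, 101)                                    # position along the profile, mm
y = blurred_cortex(x, 0.6, 0.2, 50.0, 0.8) + np.random.normal(0, 20, x.size)

popt, _ = curve_fit(blurred_cortex, x, y, p0=[1.5, 0.0, 0.0, 1.0])
print(f"estimated thickness: {popt[0]:.2f} mm")
```

Because the cortical density is constrained rather than fitted, the thickness remains identifiable even when it is well below the width of the point spread function.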

Relevance: 30.00%

Publisher:

Abstract:

In this paper, a beamforming correction for identifying dipole sources by means of phased microphone array measurements is presented and implemented numerically and experimentally. Conventional beamforming techniques, which are developed for monopole sources, can lead to significant errors when applied to reconstruct dipole sources. A previous correction technique to microphone signals is extended to account for both source location and source power for two-dimensional microphone arrays. The new dipole-beamforming algorithm is developed by modifying the basic source definition used for beamforming. This technique improves the previous signal correction method and yields a beamformer applicable to sources which are suspected to be dipole in nature. Numerical simulations are performed, which validate the capability of this beamformer to recover ideal dipole sources. The beamforming correction is applied to the identification of realistic aeolian-tone dipoles and shows an improvement of array performance on estimating dipole source powers. © 2008 Acoustical Society of America.
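
A schematic frequency-domain beamformer makes the distinction concrete: relative to the monopole case, a dipole source model simply changes the assumed source field, and hence the steering vector, used when scanning the array over candidate source positions. The sketch below is a generic conventional beamformer with an added cos(theta) dipole directivity factor; it illustrates the idea only, not the correction algorithm of the paper, and all names are assumptions.

```python
import numpy as np

def steering_vector(mics, src, k, axis=None):
    """Free-field steering vector for a monopole, or (if `axis` is given)
    a schematic dipole whose field carries a cos(theta) directivity."""
    d = mics - src                       # mic-to-source vectors, shape (M, 3)
    r = np.linalg.norm(d, axis=1)
    g = np.exp(-1j * k * r) / r          # monopole amplitude and phase
    if axis is not None:                 # dipole: weight by cos of angle to dipole axis
        cos_t = (d @ axis) / (r * np.linalg.norm(axis))
        g = g * cos_t
    return g

def beamform_power(csm, g):
    """Conventional beamforming output for cross-spectral matrix `csm`."""
    g = g / np.linalg.norm(g)
    return np.real(np.conj(g) @ csm @ g)

# usage sketch: scan a grid of candidate positions with both source models;
# mics is an (M, 3) array of microphone positions and csm the (M, M)
# cross-spectral matrix at wavenumber k -- all illustrative, not the paper's code.
```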

Relevance: 30.00%

Publisher:

Abstract:

A noncontacting and noninterferometric depth discrimination technique, which is based on differential confocal microscopy, was used to measure the inverse piezoelectric extension of a piezoelectric ceramic lead zirconate titanate actuator. The response characteristics of the actuator with respect to the applied voltage, including displacement, linearity, and hysteresis, were obtained with nanometer measurement accuracy. Errors of the measurement have been analyzed. (C) 2001 Society of Photo-Optical Instrumentation Engineers.

Relevance: 30.00%

Publisher:

Abstract:

An instrument, the Caltech High Energy Isotope Spectrometer Telescope (HEIST), has been developed to measure isotopic abundances of cosmic ray nuclei in the charge range 3 ≤ Z ≤ 28 and the energy range between 30 and 800 MeV/nuc by employing an energy-loss versus residual-energy technique. Measurements of particle trajectories and energy losses are made using a multiwire proportional counter hodoscope and a stack of CsI(Tl) crystal scintillators, respectively. A detailed analysis has been made of the mass resolution capabilities of this instrument.
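
The energy-loss versus residual-energy method infers the particle mass by combining the energy lost in a detector of known thickness with the residual energy deposited when the particle stops. A hedged sketch using a simple power-law range-energy relation R = k (M/Z^2)(E/M)^a follows; the constants k and a are illustrative round numbers, not the calibrated HEIST values.

```python
# Schematic delta-E / residual-energy mass estimate from a power-law
# range-energy relation R = K * (M / Z**2) * (E / M)**A.
K, A = 2.7e-3, 1.77        # illustrative values (roughly silicon/CsI-like), not HEIST's

def mass_estimate(delta_e, e_res, z, thickness):
    """Solve thickness = R(dE + E', M, Z) - R(E', M, Z) for the mass M (AMU).
    delta_e and e_res in MeV; thickness in g/cm^2."""
    num = K * ((delta_e + e_res) ** A - e_res ** A)
    return (num / (z ** 2 * thickness)) ** (1.0 / (A - 1.0))

# an oxygen-like event (expect M ~ 16): 300 MeV lost in a 0.39 g/cm^2 layer,
# 500 MeV residual energy, charge Z = 8
print(mass_estimate(300.0, 500.0, z=8, thickness=0.39))
```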

Landau fluctuations set a fundamental limit on the attainable mass resolution, which for this instrument ranges between ~0.07 AMU for Z ≈ 3 and ~0.2 AMU for Z ≈ 26. Contributions to the mass resolution due to uncertainties in measuring the path length and energy losses of the detected particles are shown to degrade the overall mass resolution to between ~0.1 AMU (Z ≈ 3) and ~0.3 AMU (Z ≈ 26).

A formalism, based on the leaky box model of cosmic ray propagation, is developed for obtaining isotopic abundance ratios at the cosmic ray sources from abundances measured in local interstellar space for elements having three or more stable isotopes, one of which is believed to be absent at the cosmic ray sources. This purely secondary isotope is used as a tracer of secondary production during propagation. This technique is illustrated for the isotopes of the elements O, Ne, S, Ar and Ca.
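
The tracer idea can be written schematically with the steady-state leaky-box balance for the interstellar density N_i of isotope i (a simplified form, with illustrative notation, not the full propagation calculation of the thesis):

```latex
% schematic steady-state leaky-box balance
N_i \left( \frac{1}{\tau_{\mathrm{esc}}} + \frac{1}{\tau_i^{\mathrm{int}}} \right)
  = Q_i + \sum_{j} \frac{N_j}{\tau_{j \to i}}
```

Setting the source term Q_s = 0 for the purely secondary isotope s fixes the normalization of the secondary-production terms; subtracting the corresponding secondary contribution from the other isotopes of the element then yields their source abundance ratios.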

The uncertainties in the derived source ratios due to errors in fragmentation and total inelastic cross sections, in observed spectral shapes, and in measured abundances are evaluated. It is shown that the dominant sources of uncertainty are uncorrelated errors in the fragmentation cross sections and statistical uncertainties in measuring local interstellar abundances.

These results are applied to estimate the extent to which uncertainties must be reduced in order to distinguish between cosmic ray production in a solar-like environment and in various environments with greater neutron enrichments.

Relevance: 30.00%

Publisher:

Abstract:

This thesis describes a measurement of B0-B0bar mixing in events produced by electron-positron annihilation at a center-of-mass energy of 29 GeV. The data were taken by the Mark II detector in the PEP storage ring at the Stanford Linear Accelerator Center between 1981 and 1987, and correspond to a total integrated luminosity of 224 pb-1.

We used a new method, based on the kinematics of hadronic events containing two leptons, to measure the probability, χ, that a hadron initially containing a b (b-bar) quark decays to a positive (negative) lepton. We find χ = 0.17 +0.15/-0.08, with 90% confidence level upper and lower limits of 0.38 and 0.06, respectively, including all estimated systematic errors. Because of the good separation of signal and background, this result is relatively insensitive to various systematic effects which have complicated previous measurements.
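
Although the thesis result comes from a kinematic analysis rather than simple counting, the way a mixing probability of this kind enters a dilepton sample can be illustrated with a toy relation: if each of the two b hadrons independently yields a wrong-sign lepton with probability χ, the like-sign dilepton fraction is 2χ(1 - χ). The snippet below is only that toy picture, ignoring backgrounds, cascade decays and efficiencies.

```python
def like_sign_fraction(chi):
    """Probability that a b/b-bar pair yields a like-sign dilepton, assuming each
    hadron independently gives a wrong-sign lepton with probability chi
    (toy picture: no backgrounds, cascades or efficiency effects)."""
    return 2 * chi * (1 - chi)

for chi in (0.0, 0.06, 0.17, 0.38, 0.5):
    print(f"chi = {chi:.2f}  ->  like-sign fraction = {like_sign_fraction(chi):.3f}")
```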

We interpret this result as evidence for the mixing of neutral B mesons. Based on existing B0d mixing rate measurements, and some assumptions about the fractions of B0d and B0s mesons present in the data, this result favors maximal mixing of B0s mesons, although it cannot rule out zero B0s mixing at the 90% confidence level.

Relevance: 30.00%

Publisher:

Abstract:

Understanding mixture formation phenomena during the first few cycles of an engine cold start is extremely important for achieving the minimum engine-out emission levels at the time when the catalytic converter is not yet operational. Of special importance is the structure of the charge (film, droplets and vapour) which enters the cylinder during this time interval, as well as its concentration profile. However, direct experimental studies of the fuel behaviour in the inlet port have so far been less than fully successful due to the brevity of the process and the lack of a suitable experimental technique. We present measurements of the hydrocarbon (HC) concentration in the manifold and port of a production SI engine using the Fast Response Flame Ionisation Detector (FRFID). It has been widely reported in the past few years how the FRFID can be used to study the exhaust and in-cylinder HC concentrations with a time resolution of a few degrees of crank angle, and the device has contributed significantly to the understanding of unburned HC emissions. Using the FRFID in the inlet manifold is difficult because of the presence of liquid droplets and the low and fluctuating pressure levels, which lead to significant changes in the response time of the instrument. However, using recently developed procedures to correct for the errors caused by these effects, the concentration at the sampling point can be reconstructed, allowing the FRFID signal to be aligned with actual events in the engine. © 1996 Society of Automotive Engineers, Inc.
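
As an illustration of the kind of signal reconstruction involved (not the published correction procedure), a sample line whose behaviour is approximated as a first-order lag with a pressure-dependent time constant can be inverted numerically; the function and variable names below are hypothetical.

```python
import numpy as np

def correct_first_order_lag(t, c_meas, tau):
    """Reconstruct the concentration at the sampling point from the measured
    FID output, treating the sample line as a first-order lag with time
    constant tau (which in practice varies with sample-line pressure):
    c_in(t) ~= c_meas(t) + tau * dc_meas/dt."""
    return c_meas + tau * np.gradient(c_meas, t)

# usage sketch: tau would be estimated from the instantaneous manifold pressure,
# e.g. tau = tau_ref * (p_ref / p_manifold); this mapping is an assumption for
# illustration only, not the correction described in the paper.
```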

Relevance: 30.00%

Publisher:

Abstract:

Isolation of high neutral lipid-containing microalgae is key to the commercial success of microalgae-based biofuel production. The Nile red fluorescence method has been successfully applied to the determination of lipids in certain microalgae, but has been unsuccessful in many others, particularly those with thick, rigid cell walls that prevent penetration of the fluorescent dye. The conventional "one sample at a time" method was also time-consuming. In this study, the solvent dimethyl sulfoxide (DMSO) was introduced to microalgal samples as the stain carrier at an elevated temperature. The cellular neutral lipids were determined and quantified using a 96-well plate on a fluorescence spectrophotometer with an excitation wavelength of 530 nm and an emission wavelength of 575 nm. An optimized procedure yielded a high correlation coefficient (R² = 0.998) with the lipid standard triolein and repeated measurements of replicates. Application of the improved method to several green algal strains gave very reproducible results, with relative standard errors of 8.5%, 3.9% and 8.6%, 4.5% for repeatability and reproducibility at two concentration levels (2.0 µg/mL and 20 µg/mL), respectively. Moreover, the detection and quantification limits of the improved Nile red staining method were 0.8 µg/mL and 2.0 µg/mL for the neutral lipid standard triolein, respectively. The modified method and a conventional gravimetric determination method provided similar results on replicate samples. The 96-well plate-based Nile red method can be used as a high-throughput technique for rapid screening of a broader spectrum of naturally occurring and genetically modified algal strains and mutants for high neutral lipid/oil production. (C) 2009 Published by Elsevier B.V.
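
A minimal sketch of how such a plate-reader calibration and the associated detection and quantification limits can be computed is shown below; the calibration data are invented and the 3.3σ/10σ convention is an assumption, not necessarily the authors' procedure (who report an LOD of 0.8 µg/mL and an LOQ of 2.0 µg/mL).

```python
import numpy as np

# illustrative triolein calibration for background-subtracted plate-reader readings
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])          # ug/mL triolein
fluo = np.array([120., 235., 470., 1160., 2330., 4640.])   # relative fluorescence units

slope, intercept = np.polyfit(conc, fluo, 1)                # linear calibration curve
resid = fluo - (slope * conc + intercept)
sigma = resid.std(ddof=2)                                   # residual standard deviation

lod = 3.3 * sigma / slope                                   # common 3.3*sigma/slope convention
loq = 10.0 * sigma / slope
print(f"LOD ~ {lod:.2f} ug/mL, LOQ ~ {loq:.2f} ug/mL")

unknown_rfu = 950.0                                         # reading from an algal extract
print(f"unknown ~ {(unknown_rfu - intercept) / slope:.1f} ug/mL triolein equivalent")
```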

Relevance: 30.00%

Publisher:

Abstract:

Past measurements of the radiocarbon interhemispheric offset have been restricted to relatively young samples because of a lack of older dendrochronologically secure Southern Hemisphere tree-ring chronologies. The Southern Hemisphere calibration data set SHCal04 earlier than AD 950 utilizes a variable interhemispheric offset derived from measured 2nd millennium AD Southern Hemisphere/Northern Hemisphere sample pairs with the assumption of stable Holocene ocean/atmosphere interactions. This study extends the range of measured interhemispheric offset values with 20 decadal New Zealand kauri and Irish oak sample pairs from 3 selected time intervals in the 1st millennium AD and is part of a larger program to obtain high-precision Southern Hemisphere 14C data continuously back to 200 BC. We found an average interhemispheric offset of 35 ± 6 yr, which, although consistent with previously published 2nd millennium AD measurements, is lower than the offset of 55–58 yr utilized in SHCal04. We concur with McCormac et al. (2008) that the IntCal04 measurement for AD 775 may indeed be slightly too old, but also suggest that the McCormac results appear excessively young for the interval AD 755–785. In addition, we raise the issue of laboratory bias and calibration errors, and encourage all laboratories to check their consistency with appropriate calibration curves and to invest more effort into improving the accuracy of those curves.
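
One conventional way to combine such measured sample pairs into a mean offset with an uncertainty is an inverse-variance weighted mean; whether this matches the averaging actually used for the 35 ± 6 yr figure is not stated here, so the sketch below is purely illustrative.

```python
import numpy as np

def weighted_offset(sh_age, nh_age, sh_err, nh_err):
    """Inverse-variance weighted mean of per-pair offsets (SH - NH 14C age)
    and its standard error, one conventional way to combine sample pairs."""
    offset = np.asarray(sh_age) - np.asarray(nh_age)
    var = np.asarray(sh_err) ** 2 + np.asarray(nh_err) ** 2   # per-pair variance
    w = 1.0 / var
    mean = np.sum(w * offset) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))
    return mean, err
```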

Relevance: 30.00%

Publisher:

Abstract:

Semiconductor manufacturers are increasingly reliant on optical emission spectroscopy (OES) to source information on plasma characteristics and process change. However, nonlinearities in the response of OES sensors and errors in their calibration lead to discrepancies in the observed wavelength response of the detectors. This paper presents a technique for the retrospective spectral calibration of multiple OES sensors. The underlying methodology is given, and alignment performance is evaluated using OES recordings from a semiconductor plasma process. The paper concludes with a discussion of results and suggests avenues for future work.
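
The paper's own calibration methodology is not reproduced in this abstract; as a generic illustration of retrospective alignment, a wavelength (pixel) offset between two OES spectra recorded on nominally identical detectors can be estimated by cross-correlation, as sketched below with illustrative names.

```python
import numpy as np

def pixel_shift(ref_spectrum, test_spectrum):
    """Estimate the integer pixel shift that best aligns test_spectrum to
    ref_spectrum by maximizing their (mean-removed) cross-correlation."""
    a = ref_spectrum - np.mean(ref_spectrum)
    b = test_spectrum - np.mean(test_spectrum)
    corr = np.correlate(a, b, mode="full")
    # positive lag means features in test_spectrum sit at lower pixel indices
    return np.argmax(corr) - (len(b) - 1)

# The shift can then be converted to a wavelength correction using the detector's
# nm-per-pixel dispersion; sub-pixel refinement would interpolate around the peak.
```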

Relevance: 30.00%

Publisher:

Abstract:

We propose a new method, based on inertial sensors, to automatically measure at high frequency the durations of the main phases of ski jumping (i.e. take-off release, take-off, and early flight). The kinematics of the ski jumping movement were recorded by four inertial sensors, attached to the thigh and shank of junior athletes, for 40 jumps performed during indoor conditions and 36 jumps in field conditions. An algorithm was designed to detect temporal events from the recorded signals and to estimate the duration of each phase. These durations were evaluated against a reference camera-based motion capture system and by trainers conducting video observations. The precision for the take-off release and take-off durations (indoor < 39 ms, outdoor = 27 ms) can be considered technically valid for performance assessment. The errors for early flight duration (indoor = 22 ms, outdoor = 119 ms) were comparable to the trainers' variability and should be interpreted with caution. No significant changes in the error were noted between indoor and outdoor conditions, and individual jumping technique did not influence the error of take-off release and take-off. Therefore, the proposed system can provide valuable information for performance evaluation of ski jumpers during training sessions.
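
A schematic of the kind of rule-based event detection such an algorithm performs is sketched below; the threshold logic, signal names and parameters are illustrative assumptions, not the published algorithm.

```python
import numpy as np

def detect_phase(t, gyro, threshold, min_duration):
    """Return (start, end) times of the first interval in which the angular
    velocity stays above `threshold` for at least `min_duration` seconds,
    a schematic stand-in for a temporal event detector."""
    t, gyro = np.asarray(t), np.asarray(gyro)
    above = gyro > threshold
    start = None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i                                  # candidate phase onset
        elif not flag and start is not None:
            if t[i - 1] - t[start] >= min_duration:
                return t[start], t[i - 1]              # phase onset and end times
            start = None                               # too short: discard
    return None

# phase duration = end - start; such durations are what get compared against the
# camera-based reference to obtain the millisecond-level errors quoted above.
```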

Relevance: 30.00%

Publisher:

Abstract:

The problem of using information available from one variable X to make inference about another Y is classical in many physical and social sciences. In statistics this is often done via regression analysis, where the mean response is used to model the data. One stipulates the model Y = µ(X) + ɛ. Here µ(X) is the mean response at the predictor variable value X = x, and ɛ = Y - µ(X) is the error. In classical regression analysis both (X, Y) are observable, and one then proceeds to make inference about the mean response function µ(X). In practice there are numerous examples where X is not available, but a variable Z is observed which provides an estimate of X. As an example, consider the herbicide study of Rudemo et al. [3], in which a nominal measured amount Z of herbicide was applied to a plant but the actual amount absorbed by the plant, X, is unobservable. As another example, from Wang [5], an epidemiologist studies the severity of a lung disease, Y, among the residents in a city in relation to the amount of certain air pollutants. The amount of the air pollutants, Z, can be measured at certain observation stations in the city, but the actual exposure of the residents to the pollutants, X, is unobservable and may vary randomly from the Z-values. In both cases X = Z + error. This is the so-called Berkson measurement error model. In the more classical measurement error model one observes an unbiased estimator W of X and stipulates the relation W = X + error. An example of this model occurs when assessing the effect of nutrition X on a disease: measuring nutrition intake precisely within 24 hours is almost impossible. There are many similar examples in agricultural or medical studies; see, e.g., Carroll, Ruppert and Stefanski [1] and Fuller [2], among others. In this talk we shall address the question of fitting a parametric model to the regression function µ(X) in the Berkson measurement error model: Y = µ(X) + ɛ, X = Z + η, where η and ɛ are random errors with E(ɛ) = 0, X and η are d-dimensional, and Z is the observable d-dimensional r.v.
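
A small simulation makes the practical consequence of the Berkson structure concrete: with illustrative parameter choices, naive regression of Y on the observed Z remains unbiased when µ is linear, but acquires a bias (here in the constant term, equal to Var(η)) when µ is nonlinear.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.uniform(0, 10, n)                 # observed (nominal) value
x = z + rng.normal(0, 1.0, n)             # true value = observed + Berkson error eta
eps = rng.normal(0, 0.5, n)

# linear mean function: regressing y on z still recovers slope ~2 and intercept ~1
y_lin = 1.0 + 2.0 * x + eps
print(np.polyfit(z, y_lin, 1))            # approx [2.0, 1.0]

# quadratic mean function: naive regression of y on z is biased
y_quad = x ** 2 + eps
print(np.polyfit(z, y_quad, 2))           # constant term approx Var(eta) = 1, not 0
```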

Relevance: 30.00%

Publisher:

Abstract:

Airborne laser altimetry has the potential to make frequent detailed observations that are important for many aspects of studying land surface processes. However, the uncertainties inherent in airborne laser altimetry data have rarely been well measured. Uncertainty is often specified as generally as 20 cm in elevation and 40 cm planimetric. To better constrain these uncertainties, we present an analysis of several datasets acquired specifically to study the temporal consistency of laser altimetry data, and thus assess its operational value. The error budget has three main components, each with a time regime. For measurements acquired less than 50 ms apart, elevations have a local standard deviation in height of 3.5 cm, enabling the local measurement of surface roughness of the order of 5 cm. Points acquired seconds apart acquire an additional random error due to Differential Global Positioning System (DGPS) fluctuation. Measurements made up to an hour apart show an elevation drift of 7 cm over half an hour. Over months, this drift gives rise to a random elevation offset between swathes, with an average of 6.4 cm. The RMS planimetric error in point location was derived as 37.4 cm. We conclude by considering the consequences of these uncertainties for the principal application of laser altimetry in the UK, intertidal zone monitoring.
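
If the three elevation-error components quoted above are treated as independent for a given comparison, they combine in quadrature; the sketch below only performs that arithmetic, and the independence assumption is itself an approximation that depends on the time regimes involved.

```python
import numpy as np

# elevation error components from the paragraph above, in cm
local_noise  = 3.5   # point-to-point scatter (points < 50 ms apart)
dgps_drift   = 7.0   # drift over roughly half an hour
swath_offset = 6.4   # mean random offset between swathes flown months apart

# independent components add in quadrature
total = np.sqrt(local_noise**2 + dgps_drift**2 + swath_offset**2)
print(f"combined 1-sigma elevation uncertainty ~ {total:.1f} cm")
```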

Relevance: 30.00%

Publisher:

Abstract:

Background: Medication errors in general practice are an important source of potentially preventable morbidity and mortality. Building on previous descriptive, qualitative and pilot work, we sought to investigate the effectiveness, cost-effectiveness and likely generalisability of a complex pharmacist-led IT-based intervention aiming to improve prescribing safety in general practice.

Objectives: We sought to:
• Test the hypothesis that a pharmacist-led IT-based complex intervention using educational outreach and practical support is more effective than simple feedback in reducing the proportion of patients at risk from errors in prescribing and medicines management in general practice.
• Conduct an economic evaluation of the cost per error avoided, from the perspective of the National Health Service (NHS).
• Analyse data recorded by pharmacists, summarising the proportions of patients judged to be at clinical risk, the actions recommended by pharmacists, and actions completed in the practices.
• Explore the views and experiences of healthcare professionals and NHS managers concerning the intervention; investigate potential explanations for the observed effects, and inform decisions on the future roll-out of the pharmacist-led intervention.
• Examine secular trends in the outcome measures of interest, allowing for informal comparison between trial practices and practices that did not participate in the trial but contributed to the QRESEARCH database.

Methods: Two-arm cluster randomised controlled trial of 72 English general practices with embedded economic analysis and longitudinal descriptive and qualitative analysis. Informal comparison of the trial findings with a national descriptive study investigating secular trends, undertaken using data from practices contributing to the QRESEARCH database. The main outcomes of interest were prescribing errors and medication monitoring errors at six and 12 months following the intervention.

Results: Participants in the pharmacist intervention arm practices were significantly less likely to have been prescribed a non-selective NSAID without a proton pump inhibitor (PPI) if they had a history of peptic ulcer (OR 0.58, 95% CI 0.38, 0.89), to have been prescribed a beta-blocker if they had asthma (OR 0.73, 95% CI 0.58, 0.91) or (in those aged 75 years and older) to have been prescribed an ACE inhibitor or diuretic without a measurement of urea and electrolytes in the last 15 months (OR 0.51, 95% CI 0.34, 0.78). The economic analysis suggests that the PINCER pharmacist intervention has a 95% probability of being cost-effective if the decision-maker's ceiling willingness to pay reaches £75 (6 months) or £85 (12 months) per error avoided. The intervention addressed an issue that was important to professionals and their teams and was delivered in a way that was acceptable to practices, with minimum disruption of normal work processes. Comparison of the trial findings with changes seen in QRESEARCH practices indicated that any reductions achieved in the simple feedback arm were likely, in the main, to have been related to secular trends rather than to the intervention.

Conclusions: Compared with simple feedback, the pharmacist-led intervention resulted in reductions in the proportions of patients at risk of prescribing and monitoring errors for the primary outcome measures and the composite secondary outcome measures at six months and (with the exception of the NSAID/peptic ulcer outcome measure) 12 months post-intervention. The intervention is acceptable to pharmacists and practices, and is likely to be seen as cost-effective by decision makers.

Relevance: 30.00%

Publisher:

Abstract:

In order to validate the reported precision of space‐based atmospheric composition measurements, validation studies often focus on measurements in the tropical stratosphere, where natural variability is weak. The scatter in tropical measurements can then be used as an upper limit on single‐profile measurement precision. Here we introduce a method of quantifying the scatter of tropical measurements which aims to minimize the effects of short‐term atmospheric variability while maintaining large enough sample sizes that the results can be taken as representative of the full data set. We apply this technique to measurements of O3, HNO3, CO, H2O, NO, NO2, N2O, CH4, CCl2F2, and CCl3F produced by the Atmospheric Chemistry Experiment–Fourier Transform Spectrometer (ACE‐FTS). Tropical scatter in the ACE‐FTS retrievals is found to be consistent with the reported random errors (RREs) for H2O and CO at altitudes above 20 km, validating the RREs for these measurements. Tropical scatter in measurements of NO, NO2, CCl2F2, and CCl3F is roughly consistent with the RREs as long as the effect of outliers in the data set is reduced through the use of robust statistics. The scatter in measurements of O3, HNO3, CH4, and N2O in the stratosphere, while larger than the RREs, is shown to be consistent with the variability simulated in the Canadian Middle Atmosphere Model. This result implies that, for these species, stratospheric measurement scatter is dominated by natural variability, not random error, which provides added confidence in the scientific value of single‐profile measurements.
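
One common robust scatter estimate of the kind referred to above is 1.4826 times the median absolute deviation, which matches the standard deviation for Gaussian data while suppressing outliers; the sketch below compares it with a reported random error, with illustrative variable names.

```python
import numpy as np

def robust_scatter(values):
    """Robust 1-sigma scatter estimate: 1.4826 * median absolute deviation
    (consistent with the standard deviation for Gaussian data)."""
    med = np.median(values)
    return 1.4826 * np.median(np.abs(values - med))

# vmrs: tropical single-profile values of one species at one altitude
# rre:  mean reported random error for those profiles (illustrative names)
# ratio = robust_scatter(vmrs) / rre   # ~1 where scatter is explained by random error
```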

Relevance: 30.00%

Publisher:

Abstract:

Ruminant husbandry is a major source of anthropogenic greenhouse gases (GHG). Filling knowledge gaps and providing expert recommendations are important for defining future research priorities, improving methodologies and establishing science-based GHG mitigation solutions for government and non-governmental organisations, advisory/extension networks, and the ruminant livestock sector. The objective of this review is to summarize the published literature to provide a detailed assessment of the methodologies currently in use for measuring enteric methane (CH4) emission from individual animals under specific conditions, and to give recommendations regarding their application. The methods described include respiration chambers and enclosures, the sulphur hexafluoride (SF6) tracer technique, and techniques based on short-term measurements of gas concentrations in samples of exhaled air. This includes automated head chambers (e.g. the GreenFeed system), the use of carbon dioxide (CO2) as a marker, and (handheld) laser CH4 detection. Each of the techniques is compared and assessed on its capabilities and limitations, followed by methodology recommendations. It is concluded that there is no ‘one size fits all’ method for measuring CH4 emission by individual animals. Ultimately, the decision as to which method to use should be based on the experimental objectives and resources available. However, the need for high-throughput methodology, e.g. for screening large numbers of animals for genomic studies, does not justify the use of methods that are inaccurate. All CH4 measurement techniques are subject to experimental variation and random errors. Many sources of variation must be considered when measuring CH4 concentration in exhaled air samples without a quantitative, or at least regular, collection rate, or without use of a marker to indicate (or adjust for) the proportion of exhaled CH4 sampled. Consideration of the number and timing of measurements relative to diurnal patterns of CH4 emission and respiratory exchange is important, as is consideration of feeding patterns and associated patterns of rumen fermentation rate and other aspects of animal behaviour. Regardless of the method chosen, appropriate calibrations and recovery tests are required both for method establishment and for routine operation. Successful and correct use of methods requires careful attention to detail, rigour, and routine self-assessment of the quality of the data they provide.