909 results for Measurement error models
Abstract:
Background and Aims: The objective of the study was to compare data obtained from the Cosmed K4 b2 and the Deltatrac II™ metabolic cart in order to determine the validity of the Cosmed K4 b2 in measuring resting energy expenditure. Methods: Nine adult subjects (four male, five female) were measured. Resting energy expenditure was measured in consecutive sessions using the Cosmed K4 b2 and the Deltatrac II™ metabolic cart separately, and then the two devices simultaneously, performed in random order. Resting energy expenditure (REE) data from both devices were then compared with values obtained from predictive equations. Results: Bland and Altman analysis revealed mean biases between data obtained from the Cosmed K4 b2 and the Deltatrac II™ metabolic cart for the four variables REE, respiratory quotient (RQ), VCO2 and VO2 of 268 ± 702 kcal/day, -0.0 ± 0.2, 26.4 ± 118.2 ml/min and 51.6 ± 126.5 ml/min, respectively. The corresponding limits of agreement for the same four variables were all large. Bland and Altman analysis also revealed a larger mean bias between predicted REE and measured REE using Cosmed K4 b2 data (-194 ± 603 kcal/day) than using Deltatrac™ metabolic cart data (73 ± 197 kcal/day). Conclusions: Variability between the two devices was very high and a degree of measurement error was detected. Data from the Cosmed K4 b2 gave variable results in comparison with predicted values, and it would thus appear to be an invalid device for measuring REE in adults. © 2002 Elsevier Science Ltd. All rights reserved.
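As a rough illustration of the Bland and Altman analysis used in the abstract above, the mean bias and 95% limits of agreement can be computed from paired device readings as follows (a minimal sketch with made-up REE values, not the study's data):

```python
import numpy as np

def bland_altman(a, b):
    """Bland and Altman agreement statistics for paired measurements:
    mean bias of (a - b) and the 95% limits of agreement,
    bias +/- 1.96 * SD of the differences."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Made-up paired REE readings (kcal/day), one pair per subject.
cosmed = [1620, 1850, 1430, 2010, 1760]
deltatrac = [1500, 1700, 1480, 1820, 1650]
bias, lo, hi = bland_altman(cosmed, deltatrac)
```

Wide limits of agreement relative to the clinically acceptable difference, as reported in the study, indicate poor interchangeability of the two devices.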
Abstract:
There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are statistically subject to large biases, and their associated uncertainties are often not reported. This makes interpretation difficult, and makes it impossible to assess the estimation of trends or the determination of optimal sampling regimes. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized by the following four steps:
- (i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
- (ii) output the predicted flow rates as in (i) at the concentration sampling times, if the corresponding flow rates are not collected;
- (iii) establish a predictive model for the concentration data, which incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
- (iv) obtain the sum of all the products of the predicted flow and the predicted concentration over the regular time intervals to represent an estimate of the load.
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in model errors which result from intensive sampling during floods.
Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach using concentrations of total suspended sediment (TSS) and nitrogen oxides (NOx), together with gauged flow data, from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from 2 to 10 times, indicating severe biases. As expected, the traditional average and extrapolation methods produce much higher estimates than those obtained when sampling bias is taken into account.
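Step (iv) of the procedure above, summing the products of predicted flow and predicted concentration over regular time intervals, can be sketched as follows (hypothetical values; the units are assumptions for illustration):

```python
import numpy as np

def load_estimate(flow_m3s, conc_mgL, dt_seconds):
    """Step (iv): approximate the load as the sum of products of
    predicted flow and predicted concentration over regular intervals.
    With flow in m^3/s and concentration in mg/L (= g/m^3), the
    result is in grams."""
    flow = np.asarray(flow_m3s, dtype=float)
    conc = np.asarray(conc_mgL, dtype=float)
    return float(np.sum(flow * conc) * dt_seconds)

# Hypothetical 10-minute (600 s) predictions over half an hour.
flow = [5.0, 8.0, 6.0]     # m^3/s
conc = [20.0, 35.0, 25.0]  # mg/L
load_g = load_estimate(flow, conc, 600)
```

The precision of this estimate depends entirely on the predictive models behind the flow and concentration series, which is why the paper focuses on the concentration model.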
Abstract:
The Fabens method is commonly used to estimate the growth parameters k and l∞ in the von Bertalanffy model from tag-recapture data. However, the Fabens method of estimation has an inherent bias when individual growth is variable. This paper presents an asymptotically unbiased method using a maximum likelihood approach that takes account of individual variability in both maximum length and age-at-tagging. It is assumed that each individual's growth follows a von Bertalanffy curve with its own maximum length and age-at-tagging. The parameter k is assumed to be constant to ensure that the mean growth follows a von Bertalanffy curve and to avoid overparameterization. Our method also makes more efficient use of the measurements at tagging and recapture, and includes diagnostic techniques for checking distributional assumptions. The method is reasonably robust and performs better than the Fabens method when individual growth differs from the von Bertalanffy relationship. When measurement error is negligible, the estimation involves maximizing the profile likelihood of one parameter only. The method is applied to tag-recapture data for the grooved tiger prawn (Penaeus semisulcatus) from the Gulf of Carpentaria, Australia.
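For reference, the expected growth increment underlying the Fabens method follows directly from the von Bertalanffy curve; a minimal sketch (with hypothetical parameter values, not estimates from the paper):

```python
import math

def fabens_increment(l_tag, l_inf, k, dt):
    """Expected von Bertalanffy growth increment between tagging and
    recapture, as used by the Fabens method:
    (l_inf - l_tag) * (1 - exp(-k * dt))."""
    return (l_inf - l_tag) * (1.0 - math.exp(-k * dt))

# Hypothetical prawn: 25 mm at tagging, l_inf = 40 mm, k = 2.0 per
# year, recaptured 0.5 years later.
growth = fabens_increment(25.0, 40.0, 2.0, 0.5)
```

The bias the paper addresses arises when l_inf varies between individuals: fitting this mean relationship by least squares then no longer gives unbiased estimates of k and l∞.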
Abstract:
Weighing lysimeters are the standard method for directly measuring evapotranspiration (ET). This paper discusses the construction, installation, and performance of two (1.52 m × 1.52 m × 2.13-m deep) repacked weighing lysimeters for measuring ET of corn and soybean in West Central Nebraska. The cost of constructing and installing each lysimeter was approximately US $12,500, which could vary depending on the availability and cost of equipment and labor. The resolution of the lysimeters was 0.0001 mV V-1, which was limited by the data processing and storage resolution of the datalogger. This resolution was equivalent to 0.064 and 0.078 mm of ET for the north and south lysimeters, respectively. Since the percent measurement error decreases with the magnitude of the ET measured, this resolution is adequate for measuring ET for daily and longer periods, but not for shorter time steps. This resolution would result in measurement errors of less than 5% for measuring ET values of ≥3 mm, but the percent error rapidly increases for lower ET values. The resolution of the lysimeters could potentially be improved by choosing a datalogger that could process and store data with a higher resolution than the one used in this study.
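The relationship between resolution and percent measurement error described above is simple arithmetic: one resolution step expressed as a fraction of the measured ET. A quick sketch using the paper's south-lysimeter resolution of 0.078 mm:

```python
def percent_error(resolution_mm, et_mm):
    """Worst-case percent measurement error: one resolution step
    expressed as a percentage of the measured ET."""
    return 100.0 * resolution_mm / et_mm

# South lysimeter resolution of 0.078 mm (value from the paper):
err_3mm = percent_error(0.078, 3.0)   # daily ET of 3 mm: under 5%
err_05mm = percent_error(0.078, 0.5)  # short-period ET of 0.5 mm: well over 5%
```

This is why the authors judge the resolution adequate for daily totals but not for shorter time steps.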
Abstract:
Background: Segmental biomechanics of the scoliotic spine are important, since the overall spinal deformity comprises the cumulative coronal and axial rotations of individual joints. This study investigates the coronal plane segmental biomechanics of adolescent idiopathic scoliosis patients in response to physiologically relevant axial compression. Methods: Individual spinal joint compliance in the coronal plane was measured for a series of 15 idiopathic scoliosis patients using axially loaded magnetic resonance imaging. Each patient was first imaged in the supine position with no axial load, and then again following application of an axial compressive load. Coronal plane disc wedge angles in the unloaded and loaded configurations were measured. Joint moments exerted by the axial compressive load were used to derive estimates of individual joint compliance. Findings: The mean standing major Cobb angle for this patient series was 46°. Mean intra-observer measurement error for endplate inclination was 1.6°. Following loading, initially highly wedged discs demonstrated a smaller change in wedge angle than less wedged discs at certain spinal levels (+2, +1, −2 relative to the apex; p < 0.05). Highly wedged discs were observed near the apex of the curve, which corresponded to lower joint compliance in the apical region. Interpretation: While individual patients exhibit substantial variability in disc wedge angles and joint compliance, overall there is a pattern of increased disc wedging near the curve apex and reduced joint compliance in this region. Approaches such as this can provide valuable in vivo biomechanical data on the scoliotic spine, for analysis of deformity progression and surgical planning.
Abstract:
We present substantial evidence for the existence of a bias in the distribution of births of leading US politicians in favour of those who were the eldest in their cohort at school. This result adds to the research on the long-term effects of relative age among peers at school. We discuss parametric and non-parametric tests to identify this effect, and we show that it is not driven by measurement error, redshirting or a sorting effect of highly educated parents. The magnitude of the effect that we estimate is larger than what other studies on ‘relative age effects’ have found for broader populations but is in general consistent with research that looks at professional sportsmen. We also find that relative age does not seem to correlate with the quality of elected politicians.
Abstract:
Head motion (HM) is a well-known confound in analyses of functional MRI (fMRI) data. Neuroimaging researchers therefore typically treat HM as a nuisance covariate in their analyses. Even so, it is possible that HM shares a common genetic influence with the trait of interest. Here we investigate the extent to which this relationship is due to shared genetic factors, using HM extracted from resting-state fMRI and maternal and self-report measures of Inattention and Hyperactivity-Impulsivity from the Strengths and Weaknesses of ADHD Symptoms and Normal Behaviour (SWAN) scales. Our sample consisted of healthy young adult twins (N = 627, 63% female, including 95 MZ and 144 DZ twin pairs, mean age 22, with mother-reported SWAN; N = 725, 58% female, including 101 MZ and 156 DZ pairs, mean age 25, with self-reported SWAN). This design enabled us to distinguish genetic from environmental factors in the association between head movement and ADHD scales. HM was moderately correlated with maternal reports of Inattention (r = 0.17, p-value = 7.4E-5) and Hyperactivity-Impulsivity (r = 0.16, p-value = 2.9E-4), and these associations were mainly due to pleiotropic genetic factors, with genetic correlations [95% CIs] of rg = 0.24 [0.02, 0.43] and rg = 0.23 [0.07, 0.39]. Correlations between self-reports and HM were not significant, due largely to increased measurement error. These results indicate that treating HM as a nuisance covariate in neuroimaging studies of ADHD will likely reduce power to detect between-group effects, as the implicit assumption of independence between HM and Inattention or Hyperactivity-Impulsivity is not warranted. The implications of this finding are problematic for fMRI studies of ADHD, as failing to apply HM correction is known to increase the likelihood of false positives.
We discuss two ways to circumvent this problem: censoring the motion-contaminated frames of the RS-fMRI scan, or explicitly modeling the relationship between HM and Inattention or Hyperactivity-Impulsivity.
Abstract:
Thickness measurements derived from optical coherence tomography (OCT) images of the eye are a fundamental clinical and research metric, since they provide valuable information regarding the eye's anatomical and physiological characteristics, and can assist in the diagnosis and monitoring of numerous ocular conditions. Despite the importance of these measurements, limited attention has been given to the methods used to estimate thickness in OCT images of the eye. Most current studies employing OCT use an axial thickness metric, but there is evidence that axial thickness measures may be biased by tilt and curvature of the image. In this paper, standard axial thickness calculations are compared with a variety of alternative metrics for estimating tissue thickness. These methods were tested on a data set of wide-field chorio-retinal OCT scans (field of view (FOV) 60° × 25°) to examine their performance across a wide region of interest and to demonstrate the potential effect of curvature of the posterior segment of the eye on the thickness estimates. Similarly, the effect of image tilt was systematically examined with the same range of proposed metrics. The results demonstrate that image tilt and curvature of the posterior segment can affect axial tissue thickness calculations, and that alternative metrics which are not biased by these effects should be considered. This study demonstrates the need to consider alternative methods of calculating tissue thickness in order to avoid measurement error due to image tilt and curvature.
Abstract:
The reduction in natural frequencies, however small, of a civil engineering structure is the first and easiest means of estimating its impending damage. As a first-level screening for health monitoring, information on the frequency reduction of a few fundamental modes can be used to estimate the positions and the magnitude of damage in a smeared fashion. The paper presents the eigenvalue sensitivity equations, derived from first-order perturbation technique, for typical infrastructural systems: a simply supported bridge girder modelled as a beam, an end-bearing pile modelled as an axial rod, and a simply supported plate as a continuum dynamic system. A discrete structure, like a building frame, is solved for damage using eigen-sensitivity derived from a computational model. Lastly, neural network based damage identification is also demonstrated for a simply supported bridge beam, where known pairs of damage and frequency vectors are used to train a neural network. The performance of these methods under the influence of measurement error is outlined. It is hoped that the developed method could be integrated in a typical infrastructural management program, such that magnitudes of damage and their positions can be obtained using acquired natural frequencies, synthesized from the excited/ambient vibration signatures.
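The first-order eigenvalue sensitivity idea can be illustrated numerically: for a unit-mass system with mass-normalized modes, the eigenvalue shift due to a stiffness change ΔK is approximately φᵢᵀΔKφᵢ. A minimal sketch on a toy 2-DOF spring chain (not one of the paper's case studies):

```python
import numpy as np

def eigen_sensitivity(K, dK):
    """First-order perturbation estimate of the eigenvalue shifts of a
    unit-mass system whose symmetric stiffness matrix K changes by dK:
    d(lambda_i) ~ phi_i^T dK phi_i, with phi_i the orthonormal modes of K."""
    lam, phi = np.linalg.eigh(K)  # eigh: ascending eigenvalues, orthonormal modes
    dlam = np.array([phi[:, i] @ dK @ phi[:, i] for i in range(len(lam))])
    return lam, dlam

# Toy 2-DOF chain (unit springs); "damage" = 10% stiffness loss in the
# spring connecting the first mass to the ground.
K = np.array([[2.0, -1.0], [-1.0, 1.0]])
dK = np.array([[-0.1, 0.0], [0.0, 0.0]])
lam, dlam = eigen_sensitivity(K, dK)  # both predicted shifts are non-positive
```

A stiffness loss can only lower the eigenvalues (and hence the natural frequencies), which is the physical basis of the frequency-reduction screening described above.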
Abstract:
Background: Traffic offences have been considered an important predictor of crash involvement, and have often been used as a proxy safety variable for crashes. However, the association between crashes and offences has never been meta-analysed and the population effect size never established. Research is yet to determine the extent to which this relationship may be spuriously inflated through systematic measurement error, with obvious implications for researchers endeavouring to accurately identify salient factors predictive of crashes. Methodology and Principal Findings: Studies yielding a correlation between crashes and traffic offences were collated, and a meta-analysis of 144 effects drawn from 99 road safety studies was conducted. The potential impact of factors such as age, time period, crash and offence rates, crash severity and data type, sourced from either self-report surveys or archival records, was considered and discussed. After weighting for sample size, an average correlation of r = .18 was observed over a mean time period of 3.2 years. Evidence emerged suggesting that the strength of this correlation is decreasing over time. Stronger correlations between crashes and offences were generally found in studies involving younger drivers. Consistent with common method variance effects, a within-country analysis found stronger effect sizes in self-reported data, even when controlling for crash mean. Significance: The effectiveness of traffic offences as a proxy for crashes may be limited. Inclusion of elements such as independently validated crash and offence histories, or accurate measures of exposure to the road, would facilitate a better understanding of the factors that influence crash involvement.
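The sample-size weighting used in the meta-analysis above can be sketched as a weighted average of study-level correlations (hypothetical effects, not the 144 collated ones):

```python
import numpy as np

def weighted_mean_r(rs, ns):
    """Sample-size-weighted average correlation: each study's r is
    weighted by its sample size n."""
    rs = np.asarray(rs, dtype=float)
    ns = np.asarray(ns, dtype=float)
    return float(np.sum(ns * rs) / np.sum(ns))

# Hypothetical study-level correlations and sample sizes.
r_bar = weighted_mean_r([0.25, 0.15, 0.10], [100, 400, 500])
```

Weighting by sample size prevents small studies, whose correlations are the noisiest, from dominating the pooled estimate.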
Abstract:
Sixteen-electrode phantoms are developed and studied with simple instrumentation developed for Electrical Impedance Tomography (EIT). An analog instrumentation chain is developed with a sinusoidal current generator and a signal conditioner circuit. The current generator is built around a modified Howland constant current source fed by a voltage-controlled oscillator, and the signal conditioner circuit consists of an instrumentation amplifier and a narrow band-pass filter. The electronic hardware is connected to the electrodes through a DIP-switch-based multiplexer module. Phantoms with different electrode sizes and positions are developed, and the EIT forward problem is studied using the forward solver. A low-frequency, low-magnitude sinusoidal current is injected through the surface electrodes surrounding the phantom boundary and the differential potential is measured by a digital multimeter. By comparing the measured potentials with the simulated data, it is intended to reduce the measurement error, and an optimum phantom geometry is suggested. Results show that the common mode electrode reduces the common mode error of the EIT electronics and reduces the error potential in the measured data. The differential potential is reduced up to 67 mV at the voltage electrode pair opposite the current electrodes. The offset potential is measured and subtracted from the measured data for further correction. It is noticed that the potential data pattern depends on the electrode width, and an optimum electrode width is suggested. It is also observed that the measured potential becomes acceptable with a 20 mm solution column above and below the electrode array level.
Abstract:
The spatial error structure of daily precipitation derived from the latest version 7 (v7) Tropical Rainfall Measuring Mission (TRMM) level 2 data products is studied through comparison with the Asian Precipitation Highly Resolved Observational Data Integration Toward Evaluation of the Water Resources (APHRODITE) data over a subtropical region of the Indian subcontinent, for seasonal rainfall over the 6 years from June 2002 to September 2007. The data products examined include v7 data from the TRMM Microwave Imager (TMI) radiometer and the Precipitation Radar (PR), namely 2A12, 2A25, and 2B31 (combined data from PR and TMI). The spatial distribution of uncertainty in these data products was quantified based on performance metrics derived from the contingency table. For seasonal daily precipitation over a subtropical basin in India, the 2A12 data product showed greater skill in detecting and quantifying the volume of rainfall than the 2A25 and 2B31 data products. Error characterization using various error models revealed that random errors from multiplicative error models were homoscedastic and better represented rainfall estimates from the 2A12 algorithm. Error decomposition techniques performed to disentangle systematic and random errors verify that the multiplicative error model representing rainfall from the 2A12 algorithm estimated a greater percentage of systematic error than the 2A25 or 2B31 algorithms. The results verify that, although the radiometer-derived 2A12 rainfall data are known to suffer from many sources of uncertainty, spatial analysis over the case study region of India shows that the 2A12 rainfall estimates are in very good agreement with the reference estimates for the data period considered.
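A multiplicative error model of the kind mentioned above can be sketched by regressing log(satellite) on log(reference): the fitted line captures the systematic component and the residuals the random component (synthetic data below, not the TRMM/APHRODITE comparison itself):

```python
import numpy as np

def multiplicative_error_fit(sat, ref):
    """Fit a multiplicative error model by regressing log(satellite)
    on log(reference). The fitted line captures the systematic error;
    the residuals are the random error component."""
    x = np.log(np.asarray(ref, dtype=float))
    y = np.log(np.asarray(sat, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return slope, intercept, residuals

# Synthetic daily rainfall (mm): the "satellite" overestimates by 20%.
ref = [2.0, 5.0, 10.0, 20.0]
sat = [2.4, 6.0, 12.0, 24.0]
slope, intercept, res = multiplicative_error_fit(sat, ref)
```

Working in log space makes a constant percentage error appear as a constant additive offset, which is why random errors in such models tend to be homoscedastic.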
Abstract:
This paper describes multiple field-coupled simulations and device characterization of fully CMOS-MEMS-compatible smart gas sensors. The sensor structure is designed for gas/vapour detection at high temperatures (>300 °C) with low power consumption, high sensitivity and adequate mechanical robustness, employing silicon-on-insulator (SOI) wafer technology, a CMOS process and micromachining techniques. The smart gas sensor features micro-heaters using p-type MOSFETs or polysilicon resistors, and differentially transducing circuits for in situ temperature measurement. Physical models and 3D electro-thermo-mechanical simulations of the SOI micro-hotplate induced by Joule heating, self-heating, mechanical stress and piezoresistive effects are provided. The electro-thermal effect initiates and thus affects the electronic and mechanical characteristics of the sensor devices at high temperatures. Experiments on the variation and characterization of micro-heater resistance, power consumption, thermal imaging, deformation interferometry and the dynamic thermal response of the SOI micro-hotplate are presented and discussed. The full integration of the smart gas sensor with automatic temperature-reading ICs demonstrates a lowest power consumption of 57 mW at 300 °C and a fast thermal response of 10 ms. © 2008 IOP Publishing Ltd.
Abstract:
Building on Item Response Theory, we introduce students' optimal behavior in multiple-choice tests. Our simulations indicate that the optimal penalty is relatively high: although correction for guessing discriminates against risk-averse subjects, this effect is small compared with the measurement error that the penalty prevents. This result obtains when knowledge is binary or partial, under different normalizations of the score, when risk aversion is related to knowledge, and when there is a pass-fail break point. We also find that the mean degree of difficulty should be close to the mean level of knowledge, and that the variance of difficulty should be high.
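As background to the penalty discussed above, the classical correction for guessing sets the penalty so that blind guessing has zero expected score; a quick sketch (a standard textbook result, not the paper's simulation):

```python
def expected_guess_score(options, penalty):
    """Expected score from blind guessing on a multiple-choice item
    with `options` alternatives and `penalty` deducted per wrong
    answer."""
    return 1.0 / options - penalty * (options - 1) / options

# The classical correction-for-guessing penalty 1/(m - 1) makes blind
# guessing worth zero in expectation (here m = 4 options):
fair = expected_guess_score(4, 1.0 / 3.0)
```

The paper's claim is that the optimal penalty lies above this "fair" level, because the measurement error that guessing introduces outweighs the distortion from risk aversion.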
Abstract:
Using the concept of second-order autocorrelation, a novel method and instrument for accurately measuring, in real time, the interval between two linearly polarized ultrashort pulses are presented. Experiments demonstrated that the measuring method and instrument are simple and accurate (measurement error <5 fs). The instrument contains no moving elements, so no dynamic measurement error is introduced during measurement.