63 results for Error of measurement
Abstract:
In this paper we present an empirical analysis of the residential demand for electricity using annual aggregate data at the state level for 48 US states from 1995 to 2007. Earlier literature has examined residential energy consumption at the state level using annual or monthly data, focusing on the variation in price elasticities of demand across states or regions, but has failed to recognize or address two major issues. The first is that, when fitting dynamic panel models, the lagged consumption term on the right-hand side of the demand equation is endogenous, which has resulted in potentially inconsistent estimates of the long-run price elasticity of demand. The second is that energy price is likely mismeasured.
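For context, a standard form of the dynamic specification this abstract refers to can be written out; the notation below is a generic assumption, not taken from the paper.

% Hypothetical log-log dynamic panel demand equation, with C_it
% consumption, P_it real price, x_it controls, and mu_i a state effect:
\ln C_{it} = \alpha \ln C_{i,t-1} + \beta \ln P_{it} + \gamma' x_{it} + \mu_i + \varepsilon_{it}
% Because C_{i,t-1} is correlated with mu_i, least-squares estimates of
% alpha are inconsistent, and the bias propagates into the long-run
% price elasticity:
\eta_{LR} = \beta / (1 - \alpha)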
Abstract:
Investigations of Li-7(p,n)Be-7 reactions using Cu and CH primary and LiF secondary targets were performed using the VULCAN laser [C.N. Danson et al., J. Mod. Opt. 45, 1653 (1997)] at intensities up to 3x10^19 W cm^-2. The neutron yield was measured using a CR-39 plastic track detector; the yield was up to 3x10^8 sr^-1 for CH primary targets and up to 2x10^8 sr^-1 for Cu primary targets. The angular distribution of neutrons was measured at various angles and revealed an anisotropy in the neutron distribution over 180° that was greater than the error of measurement. It may be possible to exploit such reactions on high-repetition, table-top lasers for neutron radiography. (C) 2004 American Institute of Physics.
Abstract:
Policy-based network management (PBNM) paradigms provide an effective tool for end-to-end resource
management in converged next generation networks by enabling unified, adaptive and scalable solutions
that integrate and co-ordinate diverse resource management mechanisms associated with heterogeneous
access technologies. In our project, a PBNM framework for end-to-end QoS management in converged
networks is being developed. The framework consists of distributed functional entities managed within a
policy-based infrastructure to provide QoS and resource management in converged networks. Within any
QoS control framework, an effective admission control scheme is essential for maintaining the QoS of
flows present in the network. Measurement-based admission control (MBAC) and parameter-based admission control (PBAC) are two commonly used approaches. This paper presents the implementation and analysis of various measurement-based admission control schemes developed within a Java-based prototype of our policy-based framework. The evaluation is made with real traffic flows on a Linux-based experimental testbed where the current prototype is deployed. Our results show that, unlike classic MBAC-only or PBAC-only schemes, a hybrid approach that combines both methods can simultaneously improve admission control and network utilization efficiency.
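As a rough illustration of the hybrid idea (not the project's actual Java implementation; all names, thresholds, and signatures below are invented), a hybrid scheme can gate admission on both the declared parameters of the new flow (PBAC) and the currently measured load (MBAC):

# Hypothetical sketch of a hybrid MBAC/PBAC admission test in Python.
# capacity_bps, measured load, and declared peak rate are assumed inputs;
# none of these names come from the paper's prototype.

def admit_flow(declared_peak_bps: float,
               measured_load_bps: float,
               reserved_bps: float,
               capacity_bps: float,
               utilization_target: float = 0.9) -> bool:
    """Admit only if both the parameter-based and the measurement-based
    tests pass, mirroring the hybrid approach described above."""
    budget = utilization_target * capacity_bps
    # PBAC test: the sum of declared (worst-case) reservations must fit.
    pbac_ok = reserved_bps + declared_peak_bps <= budget
    # MBAC test: measured load plus the new flow's declared rate must fit.
    mbac_ok = measured_load_bps + declared_peak_bps <= budget
    return pbac_ok and mbac_ok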
Abstract:
Background: Previous research demonstrates various associations between depression, cardiovascular disease (CVD) incidence, and mortality, possibly as a result of the different methodologies used to measure depression and analyse relationships. This analysis investigated the association between depression, CVD incidence (CVDI), and mortality from CVD (MCVD), smoking-related conditions (MSRC), and all causes (MALL) in a sample data set where depression was measured using items from a validated questionnaire and using items derived from the factor analysis of a larger questionnaire, and analyses were conducted on both continuous and grouped data.
Methods: Data from the PRIME Study (N=9798 men) on depression and 10-year CVD incidence and mortality were analysed using Cox proportional hazards models.
Results: Using continuous data, both measures of depression yielded positive associations between depression and mortality (MCVD, MSRC, MALL). Using grouped data, however, the associations between the validated measure of depression and MCVD, and between the factor-analysis-derived measure and all measures of mortality, were lost.
Limitations: Low levels of depression, low numbers of individuals with high depression, and low numbers of outcome events may limit these analyses, but these levels are typical of the population studied.
Conclusions: These data demonstrate a possible association between depression and mortality, but detecting this association depends on the measure used and the method of analysis. Different findings based on methodology present clear problems for elucidating and determining relationships. The differences here argue for the use of validated scales where possible and caution against over-reduction via factor analysis and grouping.
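A minimal sketch of the kind of comparison described, using the lifelines library on a hypothetical data frame; the column names and file are assumptions, and the PRIME data are not reproduced here.

# Hypothetical illustration: Cox models with a continuous depression
# score versus a grouped (quartile) version of the same score.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("prime_subset.csv")  # assumed columns: time, event, dep_score

# Continuous exposure.
cph_cont = CoxPHFitter()
cph_cont.fit(df[["time", "event", "dep_score"]],
             duration_col="time", event_col="event")

# Grouped exposure: quartiles of the same score, coded as dummies.
df["dep_q"] = pd.qcut(df["dep_score"], 4, labels=False)
grouped = pd.get_dummies(df["dep_q"], prefix="q", drop_first=True, dtype=int)
cph_grp = CoxPHFitter()
cph_grp.fit(pd.concat([df[["time", "event"]], grouped], axis=1),
            duration_col="time", event_col="event")

# Diverging hazard ratios between the two fits would reproduce the
# sensitivity to analysis method that the abstract reports.
print(cph_cont.summary[["exp(coef)", "p"]])
print(cph_grp.summary[["exp(coef)", "p"]])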
Abstract:
PURPOSE:
To determine the test-retest variability in perimetric, optic disc, and macular thickness parameters in a cohort of treated patients with established glaucoma.
PATIENTS AND METHODS:
In this cohort study, the authors analyzed the imaging studies and visual field tests at the baseline and 6-month visits of 162 eyes of 162 participants in the Glaucoma Imaging Longitudinal Study (GILS). They assessed the difference, expressed as the standard error of measurement, in Humphrey field analyzer II (HFA) Swedish Interactive Threshold Algorithm fast, Heidelberg retinal tomograph (HRT) II, and retinal thickness analyzer (RTA) parameters between the two visits, and assumed that this difference was due to measurement variability rather than pathologic change. A statistically significant change was defined as twice the standard error of measurement.
RESULTS:
In this cohort of treated glaucoma patients, statistically significant changes were found to be 3.2 dB for mean deviation (MD), 2.2 dB for pattern standard deviation (PSD), 0.12 for cup shape measure, 0.26 mm² for rim area, and 32.8 µm and 31.8 µm for superior and inferior macular thickness, respectively. On the basis of these values, the number of potential progression events detectable in this cohort by the parameters of MD, PSD, cup shape measure, rim area, superior macular thickness, and inferior macular thickness was estimated to be 7.5, 6.0, 2.3, 5.7, 3.1, and 3.4, respectively.
CONCLUSIONS:
The variability of the measurements of MD, PSD, and rim area, relative to the range of possible values, is less than the variability of cup shape measure or macular thickness measurements. Therefore, the former measurements may be more useful global measurements for assessing progressive glaucoma damage.
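The change criterion used above follows the usual test-retest formulation, written out here as an assumed standard form; the ~24 dB usable range for MD is an illustrative assumption, not a value quoted from the paper.

% Standard error of measurement from two visits, with s_d the SD of
% the between-visit differences:
\mathrm{SEM} = s_d / \sqrt{2}
% Criterion for statistically significant change, as defined above:
\Delta_{\mathrm{sig}} = 2\,\mathrm{SEM}
% Detectable progression events over a parameter's usable range R:
N = R / \Delta_{\mathrm{sig}}
% e.g. for MD, \Delta_{\mathrm{sig}} = 3.2 dB and R \approx 24 dB give N \approx 7.5.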
Abstract:
In this paper, we present an inertial-sensor-based monitoring system for measuring the movement of human upper limbs. Two wearable inertial sensors are placed near the wrist and elbow joints, respectively. The measurement drift in segment orientation is dramatically reduced after a Kalman filter is applied to estimate inclinations from accelerations and gyroscope turning rates. Using premeasured lengths of the upper and lower arms, we compute the positions of the wrist and elbow joints via a proposed kinematic model. Experimental results demonstrate that this new motion capture system, in comparison to an optical motion tracker, has an RMS position error of less than 0.009 m, with a drift of less than 0.005 m s^-1, in five daily activities. In addition, the RMS angle error is less than 3°. This indicates that the proposed approach performs well in terms of accuracy and reliability.
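A compact sketch of the sensor-fusion idea: a complementary filter stands in here for the paper's Kalman filter, and the gains, sample rate, segment lengths, and planar kinematics are all assumptions for illustration.

# Hypothetical sketch: drift-corrected inclination plus planar
# forward kinematics for elbow/wrist position.
import numpy as np

DT = 0.01          # assumed 100 Hz sample rate
ALPHA = 0.98       # complementary-filter gain (stands in for a Kalman gain)
L_UPPER, L_FORE = 0.30, 0.26   # assumed segment lengths in metres

def fuse_inclination(theta_prev, gyro_rate, accel):
    """Blend the integrated gyro rate with the accelerometer-derived
    inclination to suppress the orientation drift described above."""
    theta_gyro = theta_prev + gyro_rate * DT
    theta_accel = np.arctan2(accel[0], accel[2])  # gravity-referenced tilt
    return ALPHA * theta_gyro + (1.0 - ALPHA) * theta_accel

def joint_positions(shoulder_angle, elbow_angle):
    """Planar forward kinematics: elbow and wrist positions from
    segment orientations and premeasured segment lengths."""
    elbow = L_UPPER * np.array([np.sin(shoulder_angle), -np.cos(shoulder_angle)])
    wrist = elbow + L_FORE * np.array([np.sin(elbow_angle), -np.cos(elbow_angle)])
    return elbow, wrist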
Abstract:
An experiment to quantify intra- and interobserver error in anatomical measurements found that interobserver measurements can vary by over 14% of mean specimen length; that disparity in measurement increases logarithmically with the number of contributors; that instructions did not reduce variation or measurement disparity; that the scale of the specimen influenced the precision of measurement (relative error increasing with specimen size); that different methods of taking a measurement yielded different results, although they did not differ in precision; and that the topographical complexity of the elements being measured may influence error (error increasing with complexity). These results highlight the noise and potential bias that should be taken into account when compiling composite datasets and meta-analyses.
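A small sketch of how the headline statistic might be computed from repeated measurements; the array layout and file name are hypothetical, and the study's data are not reproduced.

# Hypothetical: measurements[i, j] is the length recorded by observer i
# on specimen j (mm), loaded from an assumed CSV file.
import numpy as np

measurements = np.loadtxt("observer_lengths.csv", delimiter=",")

spec_mean = measurements.mean(axis=0)
spec_range = measurements.max(axis=0) - measurements.min(axis=0)

# Interobserver disparity as a percentage of mean specimen length;
# values above ~14% would match the variation reported above.
disparity_pct = 100.0 * spec_range / spec_mean
print(disparity_pct.round(1))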
Abstract:
A technique for optimizing the efficiency of the sub-map method for large-scale simultaneous localization and mapping (SLAM) is proposed. It exploits the benefits of the sub-map technique to improve the accuracy and consistency of extended Kalman filter (EKF)-based SLAM. Error models were developed and used to investigate several outstanding issues in employing the sub-map technique in SLAM: the size (distance) of an optimal sub-map; the acceptable error effect of the process noise covariance on the predictions and estimations made within a sub-map; when to terminate an existing sub-map and start a new one; and the magnitude of the process noise covariance that could produce such an effect. Numerical results from the study and an error-correcting process were used to improve the accuracy and convergence of the previously proposed Invariant Information Local Sub-map Filter. Applying this technique to the EKF-based SLAM algorithm (a) reduces the computational burden of maintaining the global map estimates and (b) simplifies the transformation complexities and data association ambiguities usually experienced in fusing sub-maps together. A Monte Carlo analysis of the system is presented to demonstrate the consistency and efficacy of the proposed technique.
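One way to phrase the sub-map termination question discussed above, as a rough sketch: the threshold, state layout, and function names are assumptions for illustration, not the paper's values or algorithm.

# Hypothetical termination rule for an EKF-SLAM sub-map: close the
# current sub-map once the local pose covariance, inflated by the
# accumulated process noise, exceeds a preset bound.
import numpy as np

TRACE_LIMIT = 0.05   # assumed bound on accumulated process-noise effect

def should_start_new_submap(P_local: np.ndarray, n_pose: int = 3) -> bool:
    """Return True when the local vehicle-pose covariance block has
    grown past the bound, signalling that a new sub-map should begin."""
    pose_cov = P_local[:n_pose, :n_pose]
    return float(np.trace(pose_cov)) > TRACE_LIMIT

# In an EKF-SLAM loop, this check would run after each prediction/update;
# on termination, the finished sub-map is fused into the global estimate
# and the filter is re-initialized with the vehicle pose as the new
# local origin.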