908 results for "Error of measurement"
Abstract:
Physical fitness can be evaluated in competitive and school sports with different field tests under different conditions and goals. To produce valid results, a field test must be practical and reach high standards of test criteria (objectivity, reliability, validity). The purpose of this study was to investigate the test criteria and the practicability of a group of field tests called «SUISSE Sport Test Konzept Basis Feldtestbatterie». Test quality and practicability were evaluated for the 20-m sprint, ventral trunk muscle test, standing long jump, 2-kg medicine ball shot, obstacle course and Cooper test. 221 children and adolescents from competitive sports and different school levels took part in the study. According to school level, they were divided into 3 groups (P: 7–11.5 y, S1: 11.6–15.5 y, S2: 15.6–21.8 y). Objectivity was tested for time or distance measurement in all tests as well as for error rating in the obstacle course. For reliability measurement, 162 subjects performed the field tests twice within a few weeks. For validity, results of the standing long jump were compared with countermovement jump performance on a force plate. Correlation analysis was performed and the level of significance was set at p < 0.05. For accuracy, the standard error was calculated. All tests achieved sufficient to excellent objectivity, with correlation coefficients (r) between 0.85 and 0.99. Reliability was very good (r = 0.84–0.97). In the Cooper and trunk tests, reliability was higher for athletes than for pupils (trunk test: r = 0.95 vs. r = 0.62; Cooper test: r = 0.90 vs. r = 0.78). In these tests, reliability decreased with increasing age (Cooper test: P: r = 0.84, S1: r = 0.69, S2: r = 0.52; trunk test: P: r = 0.69, S1: r = 0.71, S2: r = 0.39). Validity for the standing long jump was good (r = 0.75–0.86). The standard error of the mean was between 4 and 8%, with the exception of the Cooper test (athletes: 6%, pupils: 11%) and the trunk test (athletes: 14%, pupils: 46%). The results show that the evaluated group of field tests is a practicable, objective and reliable tool to determine physical skills in young athletes as well as in a school setting over a broad age range. Most of the tests met the test criteria with grades of good to excellent. The lower reliability coefficients for the Cooper and trunk tests among the pupils could be explained by motivational problems in this setting. For up to 20 subjects, a tester can accomplish the tests within 3 h. Finally, age-dependent grades were elaborated.
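As a rough illustration of the reliability and accuracy statistics used above, here is a minimal sketch that computes a test-retest correlation and a relative standard error of measurement from hypothetical 20-m sprint times (not the study data):

```python
# A minimal sketch, assuming hypothetical 20-m sprint times for six subjects
# measured in two sessions a few weeks apart (not the study data).
import numpy as np
from scipy.stats import pearsonr

trial1 = np.array([3.52, 3.61, 3.48, 3.70, 3.55, 3.66])  # session 1 times (s)
trial2 = np.array([3.50, 3.64, 3.45, 3.72, 3.58, 3.63])  # session 2 times (s)

r, p = pearsonr(trial1, trial2)             # test-retest reliability coefficient
diff = trial1 - trial2
sem = np.std(diff, ddof=1) / np.sqrt(2)     # standard error of measurement from trial differences
sem_pct = 100 * sem / np.concatenate([trial1, trial2]).mean()

print(f"reliability r = {r:.2f} (p = {p:.3f}), SEM = {sem_pct:.1f}% of the mean")
```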
Abstract:
Using a convenient and fast HPLC procedure, we determined serum concentrations of the fungistatic agent 5-fluorocytosine (5-FC) in 375 samples from 60 patients treated with this drug. The mean trough concentration (n = 127) was 64.3 mg/l (range: 11.8-208.0 mg/l), the mean peak concentration (n = 122) was 99.9 mg/l (range: 25.6-263.8 mg/l), and the mean nonpeak/nontrough concentration (n = 126) was 80.1 mg/l (range: 10.5-268.0 mg/l). In total, 134 (35.7%) samples were outside the therapeutic range (25-100 mg/l), 108 (28.8%) being too high and 26 (6.9%) too low. Forty-four (73%) patients showed 5-FC serum concentrations outside the therapeutic range at least once during the treatment course. In a prospective study we performed 65 dosage predictions on 30 patients using a 3-point method previously developed for aminoglycoside dosage adaptation. The mean prediction error of the dosage adaptation was +0.7 mg/l (range: -26.0 to +28.0 mg/l). The root mean square prediction error was 10.7 mg/l. The mean predicted concentration (65.3 mg/l) agreed very well with the mean measured concentration (64.6 mg/l). The frequency distribution of 5-FC serum concentrations indicates that 5-FC monitoring is important. The applied pharmacokinetic method allows individual adaptation of 5-FC dosage with a clinically acceptable prediction error.
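A minimal sketch of the prediction-error summary reported above (mean prediction error and root mean square prediction error), computed from hypothetical predicted/measured concentration pairs:

```python
# A minimal sketch, assuming hypothetical predicted vs. measured 5-FC
# serum concentrations (mg/l); not the study data.
import numpy as np

predicted = np.array([60.0, 72.5, 55.0, 88.0, 64.0])
measured  = np.array([58.0, 80.0, 50.5, 85.0, 70.0])

error = predicted - measured
mean_error = error.mean()               # mean prediction error (bias)
rmse = np.sqrt((error ** 2).mean())     # root mean square prediction error (precision)

print(f"mean prediction error = {mean_error:+.1f} mg/l, RMSE = {rmse:.1f} mg/l")
```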
Abstract:
This paper presents a system for 3-D reconstruction of a patient-specific surface model from calibrated X-ray images. Our system requires two X-ray images of a patient, one acquired from the anterior-posterior direction and the other from the axial direction. A custom-designed cage is utilized in our system to calibrate both images. Starting from bone contours that are interactively identified in the X-ray images, our system constructs a patient-specific surface model of the proximal femur using a statistical-model-based 2D/3D reconstruction algorithm. In this paper, we present the design and validation of the system with 25 bones. An average reconstruction error of 0.95 mm was observed.
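The abstract does not specify how the average reconstruction error was computed; one common choice is the mean closest-point distance between the reconstructed and reference surfaces, sketched here with random placeholder point clouds:

```python
# A minimal sketch, assuming the error is the mean closest-point distance between
# a reconstructed surface and a reference surface given as point clouds (mm).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
reconstructed = rng.uniform(0, 100, size=(1000, 3))   # hypothetical reconstructed vertices
reference     = rng.uniform(0, 100, size=(5000, 3))   # hypothetical ground-truth vertices

dist, _ = cKDTree(reference).query(reconstructed)     # nearest reference point per vertex
print(f"average reconstruction error = {dist.mean():.2f} mm")
```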
Abstract:
In order to assess the clinical relevance of a slice-to-volume registration algorithm, this technique was compared to manual registration. Reformatted images obtained from a diagnostic CT examination of the lower abdomen were reviewed and manually registered by 41 individuals. The results were refined by the algorithm. Furthermore, a fully automatic registration of the single slices to the whole CT examination, without manual initialization, was also performed. The manual registration error for rotation and translation was found to be 2.7 ± 2.8 degrees and 4.0 ± 2.5 mm. The automated registration algorithm significantly reduced the registration error to 1.6 ± 2.6 degrees and 1.3 ± 1.6 mm (p = 0.01). In 3 of 41 (7.3%) registration cases, the automated registration algorithm failed completely. On average, the time required for manual registration was 213 ± 197 s; automatic registration took 82 ± 15 s. Registration was also performed without any human interaction. The resulting registration error of the algorithm without manual pre-registration was found to be 2.9 ± 2.9 degrees and 1.1 ± 0.2 mm. Here, a registration took 91 ± 6 s, on average. Overall, the automated registration algorithm improved the accuracy of manual registration by 59% in rotation and 325% in translation. The absolute values are well within a clinically relevant range.
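A minimal sketch of one generic way to quantify rotation and translation registration error between an estimated and a reference rigid pose; the 4x4 matrices are hypothetical and the decomposition is an assumption, not necessarily the study's method:

```python
# A minimal sketch, assuming poses are 4x4 homogeneous rigid transforms and the
# error is the rotation angle and translation norm of the relative transform.
import numpy as np

def registration_error(T_est, T_ref):
    T_rel = np.linalg.inv(T_ref) @ T_est          # transform from reference pose to estimate
    R, t = T_rel[:3, :3], T_rel[:3, 3]
    cos_angle = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)), np.linalg.norm(t)   # (deg, mm)

T_ref = np.eye(4)                                 # hypothetical reference pose
T_est = np.eye(4)
T_est[:3, 3] = [1.0, -0.5, 0.8]                   # hypothetical translation offset
rot_err, trans_err = registration_error(T_est, T_ref)
print(f"rotation error = {rot_err:.1f} deg, translation error = {trans_err:.1f} mm")
```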
Abstract:
Recent brain imaging work has expanded our understanding of the mechanisms of perceptual, cognitive, and motor functions in human subjects, but research into the cerebral control of emotional and motivational function is at a much earlier stage. Important concepts and theories of emotion are briefly introduced, as are research designs and multimodal approaches to answering the central questions in the field. We provide a detailed inspection of the methodological and technical challenges in assessing the cerebral correlates of emotional activation, perception, learning, memory, and emotional regulation behavior in healthy humans. fMRI is particularly challenging in structures such as the amygdala as it is affected by susceptibility-related signal loss, image distortion, physiological and motion artifacts and colocalized Resting State Networks (RSNs). We review how these problems can be mitigated by using optimized echo-planar imaging (EPI) parameters, alternative MR sequences, and correction schemes. High-quality data can be acquired rapidly in these problematic regions with gradient compensated multiecho EPI or high resolution EPI with parallel imaging and optimum gradient directions, combined with distortion correction. Although neuroimaging studies of emotion encounter many difficulties regarding the limitations of measurement precision, research design, and strategies of validating neuropsychological emotion constructs, considerable improvement in data quality and sensitivity to subtle effects can be achieved. The methods outlined offer the prospect for fMRI studies of emotion to provide more sensitive, reliable, and representative models of measurement that systematically relate the dynamics of emotional regulation behavior with topographically distinct patterns of activity in the brain. This will provide additional information as an aid to assessment, categorization, and treatment of patients with emotional and personality disorders.
Abstract:
OBJECTIVE: To investigate the relationship between social support and coagulation parameter reactivity to mental stress in men and to determine if norepinephrine is involved. Lower social support is associated with higher basal coagulation activity and greater norepinephrine stress reactivity, which, in turn, is linked with hypercoagulability. However, it is not known if low social support interacts with stress to further increase coagulation reactivity or if norepinephrine affects this association. These findings may be important for determining if low social support influences thrombosis and possible acute coronary events in response to acute stress. METHODS: We measured perceived social support in 63 medication-free nonsmoking men (age (mean ± standard error of the mean) = 36.7 ± 1.7 years) who underwent an acute standardized psychosocial stress task combining public speaking and mental arithmetic in front of an audience. We measured plasma D-dimer, fibrinogen, clotting Factor VII activity (FVII:C), and plasma norepinephrine at rest as well as immediately after stress and 20 minutes after stress. RESULTS: Independent of body mass index, mean arterial pressure, and age, lower social support was associated with higher D-dimer and fibrinogen levels at baseline (p < .012) and with greater increases in fibrinogen (β = -0.36, p = .001; ΔR² = .12) and D-dimer (β = -0.21, p = .017; ΔR² = .04), but not in FVII:C (p = .83), from baseline to 20 minutes after stress. General linear models revealed significant main effects of social support and stress on fibrinogen, D-dimer, and norepinephrine (p < .035). Controlling for norepinephrine did not change the significance of the reported associations between social support and the coagulation measures D-dimer and fibrinogen. CONCLUSIONS: Our results suggest that lower social support is associated with greater coagulation activity before and after acute stress, which was unrelated to norepinephrine reactivity.
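A minimal sketch of the kind of hierarchical regression reported above (β for social support and the ΔR² it adds over the covariates), using simulated placeholder data rather than the study data:

```python
# A minimal sketch, assuming simulated data; tests how much variance a predictor
# (social support) adds over covariates (age, BMI, mean arterial pressure).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 63
df = pd.DataFrame({
    "age": rng.normal(37, 10, n),
    "bmi": rng.normal(25, 3, n),
    "pressure": rng.normal(90, 8, n),      # mean arterial pressure (hypothetical)
    "support": rng.normal(0, 1, n),        # perceived social support (standardized)
})
df["fibrinogen_change"] = -0.36 * df["support"] + rng.normal(0, 1, n)

base = smf.ols("fibrinogen_change ~ age + bmi + pressure", data=df).fit()
full = smf.ols("fibrinogen_change ~ age + bmi + pressure + support", data=df).fit()
print(f"beta(support) = {full.params['support']:+.2f}, "
      f"Delta R^2 = {full.rsquared - base.rsquared:.2f}")
```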
Abstract:
In order to predict which ecosystem functions are most at risk from biodiversity loss, meta-analyses have generalised results from biodiversity experiments over different sites and ecosystem types. In contrast, comparing the strength of biodiversity effects across a large number of ecosystem processes measured in a single experiment permits more direct comparisons. Here, we present an analysis of 418 separate measures of 38 ecosystem processes. Overall, 45 % of processes were significantly affected by plant species richness, suggesting that, while diversity affects a large number of processes, not all respond to biodiversity. We therefore compared the strength of plant diversity effects between different categories of ecosystem processes, grouping processes according to the year of measurement, their biogeochemical cycle, trophic level and compartment (above- or belowground), and according to whether they were measures of biodiversity or other ecosystem processes, biotic or abiotic, and static or dynamic. Overall, and for several individual processes, we found that biodiversity effects became stronger over time. Measures of the carbon cycle were also affected more strongly by plant species richness than were the measures associated with the nitrogen cycle. Further, we found greater plant species richness effects on measures of biodiversity than on other processes. The differential effects of plant diversity on the various types of ecosystem processes indicate that future research and political effort should shift from a general debate about whether biodiversity loss impairs ecosystem functions to focussing on the specific functions of interest and ways to preserve them individually or in combination.
Abstract:
High-resolution, well-calibrated records of lake sediments are critically important for quantitative climate reconstructions, but they remain a methodological and analytical challenge. While several comprehensive paleotemperature reconstructions have been developed across Europe, only a few quantitative high-resolution studies exist for precipitation. Here we present a calibration and verification study of lithoclastic sediment proxies from proglacial Lake Oeschinen (46°30′N, 7°44′E, 1,580 m a.s.l., north–west Swiss Alps) that are sensitive to rainfall for the period AD 1901–2008. We collected two sediment cores, one in 2007 and another in 2011. The sediments are characterized by two facies: (A) mm-laminated clastic varves and (B) turbidites. The annual character of the laminae couplets was confirmed by radiometric dating (²¹⁰Pb, ¹³⁷Cs) and independent flood-layer chronomarkers. Individual varves consist of a dark sand-size spring-summer layer enriched in siliciclastic minerals and a lighter clay-size calcite-rich winter layer. Three subtypes of varves are distinguished: Type I with a 1–1.5 mm fining upward sequence; Type II with a distinct fine-sand base up to 3 mm thick; and Type III containing multiple internal microlaminae caused by individual summer rainstorm deposits. Delta-fan surface samples and sediment trap data fingerprint different sediment source areas and transport processes from the watershed and confirm the instant response of sediment flux to rainfall and erosion. Based on a highly accurate, precise and reproducible chronology, we demonstrate that sediment accumulation (varve thickness) is a quantitative predictor for cumulative boreal alpine spring (May–June) and spring/summer (May–August) rainfall (r_MJ = 0.71, r_MJJA = 0.60, p < 0.01). Bootstrap-based verification of the calibration model reveals a root mean squared error of prediction (RMSEP_MJ = 32.7 mm, RMSEP_MJJA = 57.8 mm) which is on the order of 10–13 % of mean MJ and MJJA cumulative precipitation, respectively. These results highlight the potential of the Lake Oeschinen sediments for high-resolution reconstructions of past rainfall conditions in the northern Swiss Alps, central and eastern France and south-west Germany.
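A minimal sketch of a bootstrap verification of a linear calibration (varve thickness predicting cumulative May–June precipitation) that reports an RMSEP from out-of-bag samples; the data are simulated and the resampling scheme is one plausible choice, not necessarily the paper's exact procedure:

```python
# A minimal sketch, assuming simulated varve-thickness and precipitation data and
# an out-of-bag bootstrap to estimate the root mean squared error of prediction.
import numpy as np

rng = np.random.default_rng(42)
thickness = rng.uniform(0.5, 4.0, 108)                     # mm, hypothetical varve thickness
precip = 80 + 55 * thickness + rng.normal(0, 30, 108)      # mm, hypothetical MJ precipitation

idx, errors = np.arange(thickness.size), []
for _ in range(1000):
    train = rng.choice(idx, size=idx.size, replace=True)   # bootstrap calibration sample
    test = np.setdiff1d(idx, train)                        # out-of-bag verification sample
    if test.size == 0:
        continue
    slope, intercept = np.polyfit(thickness[train], precip[train], 1)
    pred = intercept + slope * thickness[test]
    errors.append(np.sqrt(np.mean((pred - precip[test]) ** 2)))

rmsep = float(np.mean(errors))
print(f"bootstrap RMSEP = {rmsep:.1f} mm "
      f"({100 * rmsep / precip.mean():.0f}% of mean precipitation)")
```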
Abstract:
Identifying and comparing different steady states is an important task for clinical decision making. Data from unequal sources, comprising diverse patient status information, have to be interpreted. In order to compare results, an expressive representation is key. In this contribution we suggest a criterion to calculate a context-sensitive value based on variance analysis and discuss its advantages and limitations with reference to a clinical data example obtained during anesthesia. Different plasma target levels of the anesthetic propofol were preset to reach and maintain clinically desirable steady-state conditions with target-controlled infusion (TCI). At the same time, systolic blood pressure was monitored, depth of anesthesia was recorded using the bispectral index (BIS), and propofol plasma concentrations were determined in venous blood samples. The presented analysis of variance (ANOVA) is used to quantify how accurately steady states can be monitored and compared using the three methods of measurement.
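A minimal sketch of a variance-analysis comparison of steady states (one-way ANOVA plus the share of variance explained by the preset target level); the readings are simulated and the effect-size choice (eta squared) is an assumption, not the exact criterion proposed in the paper:

```python
# A minimal sketch, assuming simulated systolic blood pressure readings recorded
# during three preset propofol target levels (steady states).
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
levels = [rng.normal(mu, 8, 30) for mu in (120, 110, 100)]   # one array per target level

F, p = f_oneway(*levels)

grand = np.concatenate(levels)
ss_between = sum(len(g) * (g.mean() - grand.mean()) ** 2 for g in levels)
eta_sq = ss_between / ((grand - grand.mean()) ** 2).sum()    # variance explained by the level

print(f"F = {F:.1f}, p = {p:.3g}, eta^2 = {eta_sq:.2f}")
```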
Abstract:
A number of studies have shown that Fourier transform infrared spectroscopy (FTIRS) can be applied to quantitatively assess lacustrine sediment constituents. In this study, we developed calibration models based on FTIRS for the quantitative determination of biogenic silica (BSi; n = 420; gradient: 0.9–56.5 %), total organic carbon (TOC; n = 309; gradient: 0–2.9 %), and total inorganic carbon (TIC; n = 152; gradient: 0–0.4 %) in a 318-m-long sediment record with a basal age of 3.6 million years from Lake El’gygytgyn, Far East Russian Arctic. The developed partial least squares (PLS) regression models yield high cross-validated (CV) R² values (R²_CV = 0.86–0.91) and low root mean square errors of cross-validation (RMSECV; 3.1–7.0 % of the gradient for the different properties). By applying these models to 6771 samples from the entire sediment record, we obtained detailed insight into bioproductivity variations in Lake El’gygytgyn throughout the middle to late Pliocene and Quaternary. High accumulation rates of BSi indicate a productivity maximum during the middle Pliocene (3.6–3.3 Ma), followed by gradually decreasing rates during the late Pliocene and Quaternary. The average BSi accumulation during the middle Pliocene was ~3 times higher than maximum accumulation rates during the past 1.5 million years. The indicated progressive deterioration of environmental and climatic conditions in the Siberian Arctic starting at ca. 3.3 Ma is consistent with the first occurrence of glacial periods and the eventual complete establishment of glacial–interglacial cycles during the Quaternary.
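A minimal sketch of a cross-validated PLS calibration reporting R²_CV and RMSECV as a percentage of the property gradient; the spectra, reference values and number of components are simulated or assumed, not taken from the study:

```python
# A minimal sketch, assuming simulated FTIR spectra (X) and reference BSi values (y);
# reports cross-validated R^2 and RMSECV for a PLS regression model.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(420, 300))                        # hypothetical absorbance spectra
y = X[:, :10].sum(axis=1) + rng.normal(0, 0.5, 420)    # hypothetical BSi reference values (%)

pls = PLSRegression(n_components=8)                    # component count is an arbitrary choice
y_cv = cross_val_predict(pls, X, y,
                         cv=KFold(n_splits=10, shuffle=True, random_state=0)).ravel()

r2_cv = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"R^2_CV = {r2_cv:.2f}, RMSECV = {rmsecv:.2f} "
      f"({100 * rmsecv / (y.max() - y.min()):.1f}% of gradient)")
```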
Abstract:
The COSMIC-2 mission is a follow-on to the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) with an upgraded payload for improved radio occultation (RO) applications. The objective of this paper is to develop a near-real-time (NRT) orbit determination system, called the NRT National Chiao Tung University (NCTU) system, to support COSMIC-2 in atmospheric applications and to verify the orbit product of COSMIC. The system is capable of automatic determination of the NRT GPS clocks and the LEO orbit and clock. To assess the NRT (NCTU) system, we use eight days of COSMIC data (March 24-31, 2011), which contain a total of 331 GPS observation sessions and 12 393 RO observable files. The parallel scheduling for independent GPS and LEO estimations and automatic time matching improves the computational efficiency by 64% compared to sequential scheduling. Orbit difference analyses suggest a 10-cm accuracy for the COSMIC orbits from the NRT (NCTU) system, consistent with the NRT University Corporation for Atmospheric Research (UCAR) system. The mean velocity accuracy of the NRT orbits of COSMIC is 0.168 mm/s, corresponding to an error of about 0.051 μrad in the bending angle. The rms differences in the NRT COSMIC clocks and in the GPS clocks between the NRT (NCTU) and the postprocessing products are 3.742 and 1.427 ns, respectively. The GPS clocks determined from a partial ground GPS network [from NRT (NCTU)] and a full one [from NRT (UCAR)] result in mean rms frequency stabilities of 6.1E-12 and 2.7E-12, respectively, corresponding to range fluctuations of 5.5 and 2.4 cm and bending angle errors of 3.75 and 1.66 μrad.
Abstract:
OBJECTIVE Angiographic C-arm CT may allow percutaneous stereotactic tumor ablations to be performed in the interventional radiology suite. Our purpose was to evaluate the accuracy of using C-arm CT for single-modality and multimodality image fusion and to compare the targeting accuracy for liver lesions with the reference standard of MDCT. MATERIALS AND METHODS C-arm CT and MDCT scans were obtained of a nonrigid rapid prototyping liver phantom containing five 1-mm targets that were placed under skin-simulating deformable plastic foam. Target registration errors of image fusion were evaluated for single-modality and multimodality image fusions. A navigation system and stereotactic aiming device were used to evaluate target positioning errors on postinterventional scans with the needles in place fused with the C-arm CT or MDCT planning images. RESULTS The target registration error of the image fusion showed no significant difference (p > 0.05) between the two modalities. In five series with a total of 25 punctures for each modality, the lateral target positioning error (i.e., the lateral distance between the needle tip and the planned trajectory) was similar for C-arm CT (mean ± SD, 1.6 ± 0.6 mm) and MDCT (1.82 ± 0.97 mm) (p = 0.33). CONCLUSION In a nonrigid liver phantom, angiographic C-arm CT may provide image fusion accuracy similar to that of MDCT for comparison of intra- and postprocedure control images with the planning images and enables stereotactic targeting accuracy similar to that of MDCT.
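A minimal sketch of the lateral target positioning error as defined above (the perpendicular distance between the needle tip and the planned trajectory), with hypothetical coordinates:

```python
# A minimal sketch, assuming hypothetical coordinates (mm) for the planned entry
# point, planned target and the needle tip measured on the control scan.
import numpy as np

entry  = np.array([0.0, 0.0, 0.0])
target = np.array([0.0, 0.0, 80.0])
tip    = np.array([1.2, -0.9, 79.5])

direction = (target - entry) / np.linalg.norm(target - entry)   # unit vector of planned trajectory
offset = tip - entry
lateral = np.linalg.norm(offset - np.dot(offset, direction) * direction)
print(f"lateral target positioning error = {lateral:.2f} mm")
```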
Abstract:
In situ diffusion experiments are performed in geological formations at underground research laboratories to overcome the limitations of laboratory diffusion experiments and to investigate scale effects. Tracer concentrations are monitored at the injection interval during the experiment (dilution data) and measured in host rock samples around the injection interval at the end of the experiment (overcoring data). Diffusion and sorption parameters are derived from inverse numerical modeling of the measured tracer data. The identifiability and the uncertainties of tritium and ²²Na⁺ diffusion and sorption parameters are studied here with synthetic experiments having the same characteristics as the in situ diffusion and retention (DR) experiment performed on Opalinus Clay. Contrary to previous identifiability analyses of in situ diffusion experiments, which used either dilution or overcoring data at approximate locations, our analysis of parameter identifiability relies simultaneously on dilution and overcoring data, accounts for the actual position of the overcoring samples in the claystone, uses realistic values of the standard deviation of the measurement errors, relies on model identification criteria to select the most appropriate hypothesis about the existence of a borehole disturbed zone, and addresses the effect of errors in the location of the sampling profiles. The simultaneous use of dilution and overcoring data provides accurate parameter estimates in the presence of measurement errors, allows identification of the right hypothesis about the borehole disturbed zone, and diminishes other model uncertainties such as those caused by errors in the volume of the circulation system and the effective diffusion coefficient of the filter. The proper interpretation of the experiment requires the right hypothesis about the borehole disturbed zone; a wrong assumption leads to large estimation errors. The use of model identification criteria helps in the selection of the best model. Small errors in the depth of the overcoring samples lead to large parameter estimation errors; therefore, attention should be paid to minimizing errors in positioning the depth of the samples. The results of the identifiability analysis do not depend on the particular realization of random numbers.
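A minimal sketch of inverse estimation of an apparent diffusion coefficient from an overcoring-style concentration profile by least squares; it assumes a strongly simplified analytical model (constant-concentration boundary into a semi-infinite homogeneous medium) and synthetic data, not the coupled borehole/rock model used to interpret the DR experiment:

```python
# A minimal sketch, assuming a semi-infinite homogeneous medium with a
# constant-concentration boundary, so C(x) = C0 * erfc(x / (2*sqrt(Da*t)));
# synthetic data only.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

t = 300 * 24 * 3600.0                                    # hypothetical duration (s)

def profile(x, c0, log10_da):
    da = 10.0 ** log10_da                                # fit log10(Da) to keep the scale tame
    return c0 * erfc(x / (2.0 * np.sqrt(da * t)))

x_obs = np.linspace(0.0, 0.15, 15)                       # distance from borehole wall (m)
rng = np.random.default_rng(0)
c_obs = profile(x_obs, 1.0, np.log10(5e-11)) + rng.normal(0, 0.02, x_obs.size)

(c0_fit, log10_da_fit), _ = curve_fit(profile, x_obs, c_obs, p0=[1.0, -10.0])
print(f"estimated apparent diffusion coefficient Da = {10 ** log10_da_fit:.2e} m^2/s")
```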
Abstract:
Perceptual learning is a training-induced improvement in performance. Mechanisms underlying the perceptual learning of depth discrimination in dynamic random-dot stereograms were examined by assessing stereothresholds as a function of decorrelation. The inflection point of the decorrelation function was defined as the level of decorrelation corresponding to 1.4 times the threshold when decorrelation is 0%. In general, stereothresholds increased with increasing decorrelation. Following training, stereothresholds and standard errors of measurement decreased systematically for all tested decorrelation values. Post-training decorrelation functions were reduced by a multiplicative constant (approximately 5), exhibiting changes in stereothresholds without changes in the inflection points. Disparity energy model simulations indicate that a post-training reduction in neuronal noise can sufficiently account for the perceptual learning effects. In two subjects, learning effects were retained over a period of six months, which may have applications for training stereo-deficient subjects.
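A minimal sketch of the inflection-point definition used above (the decorrelation level at which the stereothreshold reaches 1.4 times the threshold at 0% decorrelation), applied to hypothetical thresholds by linear interpolation:

```python
# A minimal sketch, assuming hypothetical stereothresholds measured at several
# decorrelation levels; the inflection point is found by linear interpolation.
import numpy as np

decorrelation = np.array([0, 10, 20, 30, 40, 50, 60])     # %
threshold = np.array([20, 21, 24, 30, 42, 65, 110])       # arcsec, hypothetical

criterion = 1.4 * threshold[0]                            # 1.4 x threshold at 0% decorrelation
inflection = np.interp(criterion, threshold, decorrelation)
print(f"inflection point at about {inflection:.1f}% decorrelation "
      f"(criterion threshold = {criterion:.1f} arcsec)")
```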
Abstract:
The use of group-randomized trials is particularly widespread in the evaluation of health care, educational, and screening strategies. Group-randomized trials represent a subset of a larger class of designs, often labeled nested, hierarchical, or multilevel, and are characterized by the randomization of intact social units or groups, rather than individuals. The application of random effects models to group-randomized trials requires the specification of fixed and random components of the model. The underlying assumption is usually that these random components are normally distributed. This research is intended to determine if the Type I error rate and power are affected when the assumption of normality for the random component representing the group effect is violated. In this study, simulated data are used to examine the Type I error rate, power, bias, and mean squared error of the estimates of the fixed effect and the observed intraclass correlation coefficient (ICC) when the random component representing the group effect possesses distributions with non-normal characteristics, such as heavy tails or severe skewness. The simulated data are generated with various characteristics (e.g. number of schools per condition, number of students per school, and several within-school ICCs) observed in most small, school-based, group-randomized trials. The analysis is carried out using SAS PROC MIXED, Version 6.12, with random effects specified in a RANDOM statement and restricted maximum likelihood (REML) estimation. The results from the non-normally distributed data are compared to the results obtained from the analysis of data with similar design characteristics but normally distributed random effects. The results suggest that violation of the normality assumption for the group component by a skewed or heavy-tailed distribution does not appear to influence the estimation of the fixed effect, the Type I error rate, or power. Negative biases were detected when estimating the sample ICC and dramatically increased in magnitude as the true ICC increased. These biases were not as pronounced when the true ICC was within the range observed in most group-randomized trials (i.e. 0.00 to 0.05). The normally distributed group effect also resulted in biased ICC estimates when the true ICC was greater than 0.05; however, this may be a result of higher correlation within the data.
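A minimal sketch of the simulation idea (a skewed group-level random effect in place of a normal one) together with the one-way ANOVA moment estimator of the ICC; the design values are arbitrary and SAS PROC MIXED is replaced here by the simple moment estimator:

```python
# A minimal sketch, assuming a skewed (centered exponential) group-level random
# effect and estimating the ICC with the one-way ANOVA moment estimator.
import numpy as np

rng = np.random.default_rng(0)
n_groups, n_per_group, true_icc = 20, 50, 0.05
sigma_g, sigma_e = np.sqrt(true_icc), np.sqrt(1.0 - true_icc)    # total variance scaled to 1

group_effect = (rng.exponential(1.0, n_groups) - 1.0) * sigma_g  # skewed, mean 0, SD sigma_g
y = np.repeat(group_effect, n_per_group) + rng.normal(0.0, sigma_e, n_groups * n_per_group)
groups = np.repeat(np.arange(n_groups), n_per_group)

group_means = np.array([y[groups == g].mean() for g in range(n_groups)])
msb = n_per_group * np.var(group_means, ddof=1)                             # between-group MS
msw = np.mean([np.var(y[groups == g], ddof=1) for g in range(n_groups)])    # within-group MS
icc_hat = (msb - msw) / (msb + (n_per_group - 1) * msw)
print(f"true ICC = {true_icc}, estimated ICC = {icc_hat:.3f}")
```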