992 results for "size accuracy"
Abstract:
Background: Food portion size estimation involves a complex mental process that may influence food consumption evaluation. Knowing the variables that influence this process can improve the accuracy of dietary assessment. The present study aimed to evaluate the ability of nutrition students to estimate food portions in usual meals and to relate food energy content with errors in food portion size estimation. Methods: Seventy-eight nutrition students, who had already studied food energy content, participated in this cross-sectional study on the estimation of food portions, organised into four meals. The participants estimated the quantity of each food, in grams or millilitres, with the food in view. Estimation errors were quantified, and their magnitudes were evaluated. Estimated quantities (EQ) lower than 90% and higher than 110% of the weighed quantity (WQ) were considered to represent underestimation and overestimation, respectively. The correlation between food energy content and estimation error was analysed with the Spearman correlation, and the comparison between the mean EQ and WQ was performed with the Wilcoxon signed rank test (P < 0.05). Results: A low percentage of estimates (18.5%) were considered accurate (+/- 10% of the actual weight). The most frequently underestimated food items were cauliflower, lettuce, apple and papaya; the most often overestimated items were milk, margarine and sugar. A significant positive correlation between food energy density and estimation error was found (r = 0.8166; P = 0.0002). Conclusions: The results obtained in the present study revealed a low percentage of acceptable estimations of food portion size by nutrition students, with trends toward overestimation of high-energy food items and underestimation of low-energy items.
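The under/overestimation rule described above reduces to a simple ratio check. A minimal sketch (the function name and the sample values are illustrative, not from the study):

```python
def classify_estimate(estimated: float, weighed: float) -> str:
    """Classify a portion-size estimate (EQ) against the weighed quantity (WQ).

    Estimates within +/-10% of WQ count as accurate; below 90% of WQ as
    underestimation; above 110% as overestimation (the study's thresholds).
    """
    ratio = estimated / weighed
    if ratio < 0.90:
        return "underestimation"
    if ratio > 1.10:
        return "overestimation"
    return "accurate"

# Hypothetical estimates: a 100 g apple judged as 70 g, 200 mL of milk as 260 mL
print(classify_estimate(70, 100))   # underestimation
print(classify_estimate(260, 200))  # overestimation
```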
Abstract:
Results of two experiments are reported that examined how people respond to rectangular targets of different sizes in simple hitting tasks. If a target moves in a straight line and a person is constrained to move along a linear track oriented perpendicular to the target's motion, then the length of the target along its direction of motion constrains the temporal accuracy and precision required to make the interception. The dimensions of the target perpendicular to its direction of motion place no constraints on performance in such a task. In contrast, if the person is not constrained to move along a straight track, the target's dimensions may constrain the spatial as well as the temporal accuracy and precision. The experiments reported here examined how people responded to targets of different vertical extent (height): the task was to strike targets that moved along a straight, horizontal path. In experiment 1 participants were constrained to move along a horizontal linear track to strike targets and so target height did not constrain performance. Target height, length and speed were co-varied. Movement time (MT) was unaffected by target height but was systematically affected by length (briefer movements to smaller targets) and speed (briefer movements to faster targets). Peak movement speed (Vmax) was influenced by all three independent variables: participants struck shorter, narrower and faster targets harder. In experiment 2, participants were constrained to move in a vertical plane normal to the target's direction of motion. In this task target height constrains the spatial accuracy required to contact the target. Three groups of eight participants struck targets of different height but of constant length and speed, hence constant temporal accuracy demand (different for each group; one group struck stationary targets, i.e. no temporal accuracy demand).
On average, participants showed little or no systematic response to changes in spatial accuracy demand on any dependent measure (MT, Vmax, spatial variable error). The results are interpreted in relation to previous results on movements aimed at stationary targets in the absence of visual feedback.
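The way target length and speed jointly set the temporal accuracy demand can be made concrete: the contact window is the length along the direction of motion divided by the speed. A sketch with illustrative numbers (the abstract reports no specific dimensions, and the width of the striking effector is ignored here):

```python
def temporal_window_ms(target_length_mm: float, target_speed_mm_s: float) -> float:
    """Time window (ms) during which the moving target overlaps the
    interception point: length along the motion direction / speed.
    Simplification: the striking effector is treated as a point."""
    return 1000.0 * target_length_mm / target_speed_mm_s

# A 30 mm target moving at 600 mm/s allows a 50 ms contact window;
# halving the length or doubling the speed halves the window.
print(temporal_window_ms(30, 600))  # 50.0
```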
Abstract:
PURPOSE. To evaluate the effect of disease severity and optic disc size on the diagnostic accuracies of optic nerve head (ONH), retinal nerve fiber layer (RNFL), and macular parameters with RTVue (Optovue, Fremont, CA) spectral domain optical coherence tomography (SDOCT) in glaucoma. METHODS. 110 eyes of 62 normal subjects and 193 eyes of 136 glaucoma patients from the Diagnostic Innovations in Glaucoma Study underwent ONH, RNFL, and macular imaging with RTVue. Severity of glaucoma was based on visual field index (VFI) values from standard automated perimetry. Optic disc size was based on disc area measurement using the Heidelberg Retina Tomograph II (Heidelberg Engineering, Dossenheim, Germany). Influence of disease severity and disc size on the diagnostic accuracy of RTVue was evaluated by receiver operating characteristic (ROC) and logistic regression models. RESULTS. Areas under the ROC curve (AUC) of all scanning areas increased (P < 0.05) as disease severity increased. For a VFI value of 99%, indicating early damage, AUCs for rim area, average RNFL thickness, and ganglion cell complex-root mean square were 0.693, 0.799, and 0.779, respectively. For a VFI of 70%, indicating severe damage, corresponding AUCs were 0.828, 0.985, and 0.992, respectively. Optic disc size did not influence the AUCs of any of the SDOCT scanning protocols of RTVue (P > 0.05). Sensitivity of the rim area increased and specificity decreased in large optic discs. CONCLUSIONS. Diagnostic accuracies of RTVue scanning protocols for glaucoma were significantly influenced by disease severity. Sensitivity of the rim area increased in large optic discs at the expense of specificity. (Invest Ophthalmol Vis Sci. 2011;92:1290-1296) DOI:10.1167/iovs.10-5516
Abstract:
Introduction: As part of the MicroArray Quality Control (MAQC)-II project, this analysis examines how the choice of univariate feature-selection methods and classification algorithms may influence the performance of genomic predictors under varying degrees of prediction difficulty represented by three clinically relevant endpoints. Methods: We used gene-expression data from 230 breast cancers (grouped into training and independent validation sets), and we examined 40 predictors (five univariate feature-selection methods combined with eight different classifiers) for each of the three endpoints. Their classification performance was estimated on the training set by using two different resampling methods and compared with the accuracy observed in the independent validation set. Results: A ranking of the three classification problems was obtained, and the performance of 120 models was estimated and assessed on an independent validation set. The bootstrapping estimates were closer to the validation performance than were the cross-validation estimates. The required sample size for each endpoint was estimated, and both gene-level and pathway-level analyses were performed on the obtained models. Conclusions: We showed that genomic predictor accuracy is determined largely by an interplay between sample size and classification difficulty. Variations on univariate feature-selection methods and choice of classification algorithm have only a modest impact on predictor performance, and several statistically equally good predictors can be developed for any given classification problem.
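The two internal-validation schemes compared above differ only in how they resample the training set: cross-validation deals every sample into exactly one test fold, while bootstrapping trains on draws with replacement and tests on the out-of-bag samples. A minimal index-level sketch of each split (not the MAQC-II pipeline; fold counts and seeds are illustrative):

```python
import random

def kfold_test_sets(n: int, k: int, seed: int = 0) -> list:
    """k-fold cross-validation: shuffle indices 0..n-1 and deal them
    into k disjoint test folds; each sample is tested exactly once."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def bootstrap_split(n: int, seed: int = 0):
    """One bootstrap resample: train on n draws with replacement and
    test on the out-of-bag samples that were never drawn."""
    rng = random.Random(seed)
    train = [rng.randrange(n) for _ in range(n)]
    out_of_bag = [i for i in range(n) if i not in set(train)]
    return train, out_of_bag
```

Repeating either split many times and averaging the test-set accuracy yields the resampling estimate that the study compared against the independent validation set.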
Abstract:
PURPOSE: To investigate the accuracy of 1.0T Magnetic Resonance Imaging (MRI) for measuring ventricular size in experimental hydrocephalus in rat pups. METHODS: Wistar rats were subjected to hydrocephalus by intracisternal injection of 20% kaolin (n=13). Ten rats remained uninjected and were used as controls. At the end of the experiment, the animals underwent brain MRI and were killed. Ventricular size was assessed using three measures: the ventricular ratio (VR), the cortical thickness (Cx) and the ventricular area (VA), performed on photographs of anatomical sections and on MRI. RESULTS: The images obtained through MRI were of sufficient quality to show the lateral ventricular cavities, but not to demonstrate the difference between the cortex and the white matter or the details of the deep structures of the brain. There were no statistically significant differences between the measures of VR and Cx on anatomical sections and MRI (p=0.9946 and p=0.5992, respectively). There was a difference between VA measured on anatomical sections and on MRI (p<0.0001). CONCLUSION: The images obtained through 1.0T MRI were of sufficient quality to individualize the ventricular cavities and the cerebral cortex, and to calculate the ventricular ratio in hydrocephalic rats, in comparison with their respective anatomical slices.
Abstract:
Objective: This ex vivo study evaluated the effect of pre-flaring and file size on the accuracy of the Root ZX and Novapex electronic apex locators (EALs). Material and methods: The actual working length (WL) was set 1 mm short of the apical foramen in the palatal root canals of 24 extracted maxillary molars. The teeth were embedded in an alginate mold, and two examiners performed the electronic measurements using #10, #15, and #20 K-files. The files were inserted into the root canals until the "0.0" or "APEX" signals were observed on the LED or display screens for the Novapex and Root ZX, respectively, retracting to the 1.0 mark. The measurements were repeated after pre-flaring with the S1 and SX Pro-Taper instruments. Two measurements were performed for each condition and the means were used. Intra-class correlation coefficients (ICCs) were calculated to verify the intra- and inter-examiner agreement. The mean differences between the WL and electronic length values were analyzed by the three-way ANOVA test (p<0.05). Results: ICCs were high (>0.8) and the results demonstrated a similar accuracy for both EALs (p>0.05). Statistically significant accurate measurements were verified in the pre-flared canals, except for the Novapex using a #20 K-file. Conclusions: The tested EALs showed acceptable accuracy, whereas the pre-flaring procedure revealed a greater effect than file size.
Abstract:
The present study compared the accuracy of three electronic apex locators (EALs) - Elements Diagnostic®, Root ZX® and Apex DSP® - in the presence of different irrigating solutions (0.9% saline solution and 1% sodium hypochlorite). The electronic measurements were carried out by three examiners, using twenty extracted human permanent maxillary central incisors. A size 10 K file was introduced into the root canals until reaching the 0.0 mark, and was subsequently retracted to the 1.0 mark. The gold standard (GS) measurement was obtained by combining visual and radiographic methods, and was set 1 mm short of the apical foramen. Electronic length values closer to the GS (± 0.5 mm) were considered as accurate measures. Intraclass correlation coefficients (ICCs) were used to verify inter-examiner agreement. The comparison among the EALs was performed using the McNemar and Kruskal-Wallis tests (p < 0.05). The ICCs were generally high, ranging from 0.8859 to 0.9657. Similar results were observed for the percentage of electronic measurements closer to the GS obtained with the Elements Diagnostic® and the Root ZX® EALs (p > 0.05), independent of the irrigating solutions used. The measurements taken with these two EALs were more accurate than those taken with Apex DSP®, regardless of the irrigating solution used (p < 0.05). It was concluded that Elements Diagnostic® and Root ZX® apex locators are able to locate the cementum-dentine junction more precisely than Apex DSP®. The presence of irrigating solutions does not interfere with the performance of the EALs.
Abstract:
We explored possible effects of negative covariation among finger forces in multifinger accurate force production tasks on the classical Fitts's speed-accuracy trade-off. Healthy subjects performed cyclic force changes between pairs of targets "as quickly and accurately as possible." Tasks with two force amplitudes and six ratios of force amplitude to target size were performed by each of the four fingers of the right hand and four finger combinations. There was a close to linear relation between movement time and the log-transformed ratio of target amplitude to target size across all finger combinations. There was a close to linear relation between standard deviation of force amplitude and movement time. There were no differences between the performance of either of the two "radial" fingers (index and middle) and the multifinger tasks. The "ulnar" fingers (little and ring) showed higher indices of variability and longer movement times as compared with both "radial" fingers and multifinger combinations. We conclude that potential effects of the negative covariation and also of the task-sharing across a set of fingers are counterbalanced by an increase in individual finger force variability in multifinger tasks as compared with single-finger tasks. The results speak in favor of a feed-forward model of multifinger synergies. They corroborate a hypothesis that multifinger synergies are created not to improve overall accuracy, but to allow the system larger flexibility, for example to deal with unexpected perturbations and concomitant tasks.
Abstract:
Bioelectrical impedance analysis (BIA) offers the potential for a simple, portable and relatively inexpensive technique for the in vivo measurement of total body water (TBW). The potential of BIA as a technique of body composition analysis is even greater when one considers that body water can be used as a surrogate measure of lean body mass. However, BIA has not found universal acceptance even with the introduction of multi-frequency BIA (MFBIA) which, potentially, may improve the predictive accuracy of the measurement. There are a number of reasons for this lack of acceptance, although perhaps the major reason is that no single algorithm has been developed which can be applied to all subject groups. This may be due, in part, to the commonly used wrist-to-ankle protocol which is not indicated by the basic theory of bioimpedance, where the body is considered as five interconnecting cylinders. Several workers have suggested the use of segmental BIA measurements to provide a protocol more in keeping with basic theory. However, there are other difficulties associated with the application of BIA, such as effects of hydration and ion status, posture and fluid distribution. A further putative advantage of MFBIA is the independent assessment not only of TBW but also of the extracellular fluid volume (ECW), hence heralding the possibility of being able to assess the fluid distribution between these compartments. Results of studies in this area have been, to date, mixed. Whereas strong relationships of impedance values at low frequencies with ECW, and at high frequencies with TBW, have been reported, changes in impedance are not always well correlated with changes in the size of the fluid compartments (assessed by alternative and more direct means) in pathological conditions. Furthermore, the theoretical advantages of Cole-Cole modelling over selected frequency prediction have not always been apparent.
This review will consider the principles, methodology and applications of BIA. The principles and methodology will be considered in relation to the basic theory of BIA and difficulties experienced in its application. The relative merits of single and multiple frequency BIA will be addressed, with particular attention to the latter's role in the assessment of compartmental fluid volumes. (C) 1998 Elsevier Science Ltd. All rights reserved.
Abstract:
The use of cell numbers rather than mass to quantify the size of the biotic phase in animal cell cultures causes several problems. First, the cell size varies with growth conditions, thus yields expressed in terms of cell numbers cannot be used in the normal mass balance sense. Second, experience from microbial systems shows that cell number dynamics lag behind biomass dynamics. This work demonstrates that this lag phenomenon also occurs in animal cell culture. Both the lag phenomenon and the variation in cell size are explained using a simple model of the cell cycle. The basis for the model is that onset of DNA synthesis requires accumulation of G1 cyclins to a prescribed level. This requirement is translated into a requirement for a cell to reach a critical size before commencement of DNA synthesis. A slower-growing cell will spend more time in G1 before reaching the critical mass. In contrast, the period between onset of DNA synthesis and mitosis, tau(B), is fixed. The two parameters in the model, the critical size and tau(B), were determined from eight steady-state measurements of mean cell size in a continuous hybridoma culture. Using these parameters, it was possible to predict with reasonable accuracy the transient behavior in a separate shift-up culture, i.e., a culture where cells were transferred from a lean environment to a rich environment. The implications for analyzing experimental data for animal cell culture are discussed. (C) 1997 John Wiley & Sons, Inc.
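The model's two-phase structure (a variable G1 set by growth to the critical size, plus a fixed tau(B)) can be sketched in a few lines. This sketch assumes exponential single-cell growth at a specific rate mu; the abstract does not state the growth law, and all parameter values below are illustrative, not the fitted values from the hybridoma culture:

```python
import math

def g1_duration(birth_size: float, critical_size: float, mu: float) -> float:
    """Hours a cell spends in G1: the time to grow from its birth size to
    the critical size that triggers DNA synthesis (zero if born above it).
    Assumes exponential single-cell growth at specific rate mu (1/h)."""
    if birth_size >= critical_size:
        return 0.0
    return math.log(critical_size / birth_size) / mu

def cycle_time(birth_size: float, critical_size: float, mu: float,
               tau_b: float) -> float:
    """Total cycle time: variable G1 plus the fixed S/G2/M period tau(B)."""
    return g1_duration(birth_size, critical_size, mu) + tau_b

# Illustrative values only: a slower-growing cell spends longer in G1,
# which is the mechanism behind the lag and the cell-size variation above.
fast = cycle_time(1.0, 1.6, mu=0.05, tau_b=10.0)
slow = cycle_time(1.0, 1.6, mu=0.03, tau_b=10.0)
```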
Abstract:
The step size determines the accuracy of a discrete element simulation. Because the position and velocity updates use a pre-calculated table, the step size cannot be controlled with the usual integration-formula error estimates. A step size control scheme suited to the table-driven velocity and position calculation instead uses the difference between the result of one big step and that of two small steps. This variable time step method automatically chooses a suitable time step size for each particle at each step according to the conditions. Simulation using a fixed time step is compared with simulation using a variable time step. The difference in computation time for the same accuracy using a variable step size (compared to a fixed step) depends on the particular problem. For a simple test case the times are roughly similar. However, the variable step size gives the required accuracy on the first run, whereas a fixed step size may require several runs to check the simulation accuracy, or a conservative step size that results in longer run times. (C) 2001 Elsevier Science Ltd. All rights reserved.
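The big-step/two-half-steps comparison is the classic step-doubling error estimate. A minimal sketch, with a plain explicit Euler update standing in for the paper's table-driven calculation (tolerance and growth/shrink factors are illustrative):

```python
def euler_step(x: float, v: float, a: float, dt: float):
    """One explicit update (a stand-in for the table-driven
    position/velocity calculation)."""
    return x + v * dt, v + a * dt

def adaptive_step(x: float, v: float, a: float, dt: float,
                  tol: float = 1e-6, shrink: float = 0.5,
                  grow: float = 1.5, dt_max: float = 1.0):
    """Advance one particle, sizing dt by the scheme described above:
    compare one big step against two half steps; accept and grow dt
    when the difference is within tol, otherwise shrink dt and retry."""
    while True:
        x_big, _ = euler_step(x, v, a, dt)
        x_half, v_half = euler_step(x, v, a, dt / 2)
        x_two, v_two = euler_step(x_half, v_half, a, dt / 2)
        if abs(x_big - x_two) <= tol:
            # keep the more accurate two-half-step result;
            # suggest a larger dt for this particle's next step
            return x_two, v_two, min(dt * grow, dt_max)
        dt *= shrink  # error too large: retry with a smaller step
```

Each particle carries its own dt between calls, which is how the scheme assigns a suitable step size per particle per step.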
Abstract:
Background. Although digital and videotaped images are known to be comparable for the evaluation of left ventricular function, their relative accuracy for assessment of more complex anatomy is unclear. We sought to compare reading time, storage costs, and concordance of video and digital interpretations across multiple observers and sites. Methods. One hundred one patients with valvular (90 mitral, 48 aortic, 80 tricuspid) disease were selected prospectively, and studies were stored according to video and standardized digital protocols. The same reviewer interpreted video and digital images independently and at different times with the use of a standard report form to evaluate 40 items (eg, severity of stenosis or regurgitation, leaflet thickening, and calcification) as normal or mildly, moderately, or severely abnormal. Concordance between modalities was expressed as kappa. Major discordance (difference of >1 level of severity) was ascribed to the modality that gave the lesser severity. CD-ROM was used to store digital data (20:1 lossy compression), and super-VHS videotape was used to store video data. The reading time and storage costs for each modality were compared. Results. Measured parameters were highly concordant (ejection fraction was 52% ± 13% by both). Major discordance was rare, and lesser values were reported with digital rather than video interpretation in the categories of aortic and mitral valve thickening (1% to 2%) and severity of mitral regurgitation (2%). Digital reading time was 6.8 ± 2.4 minutes, 38% shorter than with video (11.0 ± 3.0, range 8 to 22 minutes, P < .001). Compressed digital studies had an average size of 60 ± 14 megabytes (range 26 to 96 megabytes). Storage cost for video was A$0.62 per patient (18 studies per tape, total cost A$11.20), compared with A$0.31 per patient for digital storage (8 studies per CD-ROM, total cost A$2.50). Conclusion.
Digital and video interpretation were highly concordant; in the few cases of major discordance, the digital scores were lower, perhaps reflecting undersampling. Use of additional views and longer clips may be indicated to minimize discordance with video in patients with complex problems. Digital interpretation offers a significant reduction in reading times and the cost of archiving.
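The per-patient storage costs follow directly from media price and capacity; a sketch using the figures reported in the abstract (function name is illustrative):

```python
def cost_per_patient(medium_cost_aud: float, studies_per_medium: int) -> float:
    """Archival cost per study: price of one medium divided by the
    number of studies it holds."""
    return medium_cost_aud / studies_per_medium

# Figures from the abstract
tape = cost_per_patient(11.20, 18)  # super-VHS tape, 18 studies -> ~A$0.62
cd = cost_per_patient(2.50, 8)      # CD-ROM, 8 studies -> ~A$0.31
```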
Abstract:
Colorimetric analysis of roadway dust is currently a method for monitoring the incombustible content of mine roadways within Australian underground coal mines. To test the accuracy of this method, and to eliminate errors of judgement introduced by human operators in the analysis procedure, a number of samples were tested using scanning software to determine absolute greyscale values. High variability and unpredictability of results were noted during this testing, indicating that colorimetric testing is sensitive to parameters within the mine that are not currently reproduced in the preparation of reference samples. This was linked to the dependence of colour on particle surface area, and hence also to the size distribution of coal particles within the mine environment. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
The purpose of this study was to determine the prognostic accuracy of perfusion computed tomography (CT), performed at the time of emergency room admission, in acute stroke patients. Accuracy was determined by comparison of perfusion CT with delayed magnetic resonance (MR) and by monitoring the evolution of each patient's clinical condition. Twenty-two acute stroke patients underwent perfusion CT covering four contiguous 10 mm slices on admission, as well as delayed MR, performed after a median interval of 3 days after emergency room admission. Eight were treated with thrombolytic agents. Infarct size on the admission perfusion CT was compared with that on the delayed diffusion-weighted (DWI)-MR, chosen as the gold standard. Delayed magnetic resonance angiography and perfusion-weighted MR were used to detect recanalization. A potential recuperation ratio, defined as PRR = penumbra size/(penumbra size + infarct size) on the admission perfusion CT, was compared with the evolution in each patient's clinical condition, defined by the National Institutes of Health Stroke Scale (NIHSS). In the 8 cases with arterial recanalization, the size of the cerebral infarct on the delayed DWI-MR was larger than or equal to that of the infarct on the admission perfusion CT, but smaller than or equal to that of the ischemic lesion on the admission perfusion CT; and the observed improvement in the NIHSS correlated with the PRR (correlation coefficient = 0.833). In the 14 cases with persistent arterial occlusion, infarct size on the delayed DWI-MR correlated with ischemic lesion size on the admission perfusion CT (r = 0.958). In all 22 patients, the admission NIHSS correlated with the size of the ischemic area on the admission perfusion CT (r = 0.627). Based on these findings, we conclude that perfusion CT allows the accurate prediction of the final infarct size and the evaluation of clinical prognosis for acute stroke patients at the time of emergency evaluation.
It may also provide information about the extent of the penumbra. Perfusion CT could therefore be a valuable tool in the early management of acute stroke patients.
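The PRR defined above is a single ratio; a minimal sketch (the lesion areas below are hypothetical, not patient data):

```python
def potential_recuperation_ratio(penumbra: float, infarct: float) -> float:
    """PRR = penumbra size / (penumbra size + infarct size), both measured
    on the admission perfusion CT. Values near 1 mean most of the ischemic
    lesion is still salvageable penumbra; near 0, mostly established
    infarct."""
    return penumbra / (penumbra + infarct)

# Hypothetical lesion areas: 12 cm^2 of penumbra around a 4 cm^2 infarct core
print(potential_recuperation_ratio(12.0, 4.0))  # 0.75
```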
Abstract:
Predictive species distribution modelling (SDM) has become an essential tool in biodiversity conservation and management. The choice of grain size (resolution) of environmental layers used in modelling is one important factor that may affect predictions. We applied 10 distinct modelling techniques to presence-only data for 50 species in five different regions, to test whether: (1) a 10-fold coarsening of resolution affects predictive performance of SDMs, and (2) any observed effects are dependent on the type of region, modelling technique, or species considered. Results show that a 10-fold change in grain size does not severely affect predictions from species distribution models. The overall trend is towards degradation of model performance, but improvement can also be observed. Changing grain size does not equally affect models across regions, techniques, and species types. The strongest effect is on regions and species types, with tree species in the data sets (regions) with highest locational accuracy being most affected. Changing grain size had little influence on the ranking of techniques: boosted regression trees remain best at both resolutions. The number of occurrences used for model training had an important effect, with larger sample sizes resulting in better models, which tended to be more sensitive to grain. The effect of grain change was only noticeable for models reaching sufficient performance and/or with initial data that have an intrinsic error smaller than the coarser grain size.