980 results for "size accuracy"
Abstract:
[EN] The objective of this study was to determine whether a short training program using real foods would decrease students' portion-size estimation errors. Ninety student volunteers (20.18±0.44 y old) from the University of the Basque Country (Spain) were trained in observational techniques and tested in food-weight estimation during and after a 3-hour training period. The program included 57 commonly consumed foods representing a variety of forms (125 different shapes). Estimates of food weight were compared with actual weights. Effectiveness of training was determined by examining the change in the absolute percentage error across all observers and all foods over time. Data were analyzed using SPSS v. 13.0. Portion-size errors decreased after training for most of the foods. Additionally, the accuracy of the estimates clearly varied by food group and form. Amorphous foods were the type estimated least accurately both before and after training. Our findings suggest that future dietitians can be trained to estimate quantities by direct observation across a wide range of foods. However, this training may have been too brief for participants to fully assimilate its application.
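The study's effectiveness measure, the absolute percentage error of each weight estimate averaged over observers and foods, is simple to compute. A minimal sketch (the food weights below are illustrative, not the study's data):

```python
def absolute_percentage_error(estimated_g, actual_g):
    """Absolute percentage error of a single portion-weight estimate."""
    return abs(estimated_g - actual_g) / actual_g * 100.0

def mean_ape(pairs):
    """Mean absolute percentage error over (estimated, actual) weight pairs."""
    return sum(absolute_percentage_error(e, a) for e, a in pairs) / len(pairs)

# Invented estimates (g) for the same three foods before and after training
before = [(120, 100), (45, 60), (300, 250)]
after = [(105, 100), (55, 60), (260, 250)]
# mean_ape(before) is about 21.7%, mean_ape(after) about 5.8%
```

Comparing the two means per food group is what localizes the training effect to particular forms (e.g. amorphous foods).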
Abstract:
A pragmatic method is proposed for assessing the accuracy and precision of a given processing pipeline for converting computed tomography (CT) image data of bones into representative three-dimensional (3D) models of bone shapes. The method is based on coprocessing a control object with known geometry, which enables assessment of the quality of the resulting 3D models. At three stages of the conversion process, distance measurements were obtained and statistically evaluated. For this study, 31 CT datasets were processed. The final 3D model of the control object showed an average deviation from reference values of −1.07±0.52 mm standard deviation (SD) for edge distances and −0.647±0.43 mm SD for parallel side distances of the control object. Coprocessing a reference object enables the assessment of the accuracy and precision of a given processing pipeline for creating CT-based 3D bone models and is suitable for detecting most systematic or human errors when processing a CT scan. Typical errors are about the same size as the scan resolution.
Abstract:
As part of a wider study to develop an ecosystem-health monitoring program for wadeable streams of south-eastern Queensland, Australia, comparisons were made regarding the accuracy, precision and relative efficiency of single-pass backpack electrofishing and multiple-pass electrofishing plus supplementary seine netting to quantify fish assemblage attributes at two spatial scales (within discrete mesohabitat units and within stream reaches consisting of multiple mesohabitat units). The results demonstrate that multiple-pass electrofishing plus seine netting provide more accurate and precise estimates of fish species richness, assemblage composition and species relative abundances in comparison to single-pass electrofishing alone, and that intensive sampling of three mesohabitat units (equivalent to a riffle-run-pool sequence) is a more efficient sampling strategy to estimate reach-scale assemblage attributes than less intensive sampling over larger spatial scales. This intensive sampling protocol was sufficiently sensitive that relatively small differences in assemblage attributes (<20%) could be detected with a high statistical power (1-β > 0.95) and that relatively few stream reaches (<4) need be sampled to accurately estimate assemblage attributes close to the true population means. The merits and potential drawbacks of the intensive sampling strategy are discussed, and it is deemed to be suitable for a range of monitoring and bioassessment objectives.
Abstract:
We present new, simple, efficient data structures for approximate reconciliation of set differences, a useful standalone primitive for peer-to-peer networks and a natural subroutine in methods for exact reconciliation. In the approximate reconciliation problem, peers A and B respectively have subsets SA and SB of a large universe U. Peer A wishes to send a short message M to peer B with the goal that B should use M to determine as many elements in the set SB − SA as possible. To avoid the expense of round-trip communication times, we focus on the situation where a single message M is sent. We motivate the performance tradeoffs between message size, accuracy and computation time for this problem with a straightforward approach using Bloom filters. We then introduce approximate reconciliation trees, a more computationally efficient solution that combines techniques from Patricia tries, Merkle trees, and Bloom filters. We present an analysis of approximate reconciliation trees and provide experimental results comparing the various methods proposed for approximate reconciliation.
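The straightforward Bloom-filter approach mentioned in the abstract can be sketched as follows: peer A summarises SA in a filter and sends it as the single message M; any element of SB that the filter rejects is certainly in SB − SA, while false positives may hide some differences. A minimal sketch (the parameters m and k and the block names are illustrative, not taken from the paper):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: m bits, k hash functions derived from SHA-256."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _indexes(self, item):
        # Derive k indexes by salting the hash with the function number.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = 1

    def __contains__(self, item):
        return all(self.bits[idx] for idx in self._indexes(item))

# Peer A summarises S_A in the single short message M (the filter):
S_A = {f"block{i}" for i in range(100)}
S_B = {f"block{i}" for i in range(90, 120)}
msg = BloomFilter()
for x in S_A:
    msg.add(x)

# Peer B keeps every element the filter rejects: these are certainly in
# S_B - S_A (false positives can hide differences, never invent them).
recovered = {x for x in S_B if x not in msg}
```

Growing m shrinks the false-positive rate at the cost of a larger message, which is exactly the message-size/accuracy tradeoff the abstract refers to.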
Abstract:
Aim: To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: A survey of literature published in 2005. Methods: The frequency of reported sample-size calculations and the sample sizes used were extracted from the published literature. A manual search of the five leading clinical journals in ophthalmology with the highest impact factors (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. Results: A total of 1698 articles were identified, of which 40 were studies on diagnostic accuracy. One study reported that the sample size was calculated before initiating the study. Another reported consideration of sample size without a calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Conclusion: Only a few studies consider sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.
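For context on the a priori calculation that only one surveyed study reported: one widely used approach (a sketch, not necessarily what the surveyed studies should have used) sizes a diagnostic study so that sensitivity is estimated to a given confidence-interval precision, then inflates the count by the expected prevalence. The numbers below are illustrative:

```python
import math

def sample_size_sensitivity(sens, precision, prevalence, z=1.96):
    """Total subjects needed so that sensitivity is estimated to within
    +/- `precision` (95% confidence by default), inflated by prevalence."""
    n_diseased = z ** 2 * sens * (1 - sens) / precision ** 2
    return math.ceil(n_diseased / prevalence)

# e.g. an expected sensitivity of 0.90, a +/-0.05 precision target and a
# prevalence of 0.50 (close to the 50.5% median reported above):
n = sample_size_sensitivity(0.90, 0.05, 0.50)  # 277 subjects
```

Note that the mean sample size reported in the survey (172.6) falls well short of this kind of target for common precision requirements, which is the abstract's point about misleading accuracy estimates.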
Abstract:
PURPOSE: To investigate the accuracy of 1.0T magnetic resonance imaging (MRI) in measuring ventricular size in experimental hydrocephalus in pup rats. METHODS: Wistar rats were subjected to hydrocephalus by intracisternal injection of 20% kaolin (n=13). Ten rats remained uninjected and were used as controls. At the end of the experiment, animals underwent brain MRI and were killed. Ventricular size was assessed using three measures: the ventricular ratio (VR), the cortical thickness (Cx) and the ventricular area (VA), performed on photographs of anatomical sections and on MRI. RESULTS: The images obtained through MRI were of sufficient quality to show the lateral ventricular cavities, but not to demonstrate the difference between the cortex and the white matter or the details of the deep structures of the brain. There were no statistically significant differences between the measures on anatomical sections and MRI for VR and Cx (p=0.9946 and p=0.5992, respectively). There was a difference between VA measured on anatomical sections and on MRI (p<0.0001). CONCLUSION: The parameters obtained through 1.0T MRI were of sufficient quality to individualize the ventricular cavities and the cerebral cortex, and to calculate the ventricular ratio in hydrocephalic rats when compared with their respective anatomical slices.
Abstract:
Objective: This ex vivo study evaluated the effect of pre-flaring and file size on the accuracy of the Root ZX and Novapex electronic apex locators (EALs). Material and methods: The actual working length (WL) was set 1 mm short of the apical foramen in the palatal root canals of 24 extracted maxillary molars. The teeth were embedded in an alginate mold, and two examiners performed the electronic measurements using #10, #15, and #20 K-files. The files were inserted into the root canals until the "0.0" or "APEX" signals were observed on the LED or display screens for the Novapex and Root ZX, respectively, then retracted to the 1.0 mark. The measurements were repeated after pre-flaring with the S1 and SX ProTaper instruments. Two measurements were performed for each condition and their means were used. Intra-class correlation coefficients (ICCs) were calculated to verify intra- and inter-examiner agreement. The mean differences between the WL and the electronic lengths were analyzed by three-way ANOVA (p<0.05). Results: ICCs were high (>0.8) and the results demonstrated similar accuracy for both EALs (p>0.05). Statistically significantly more accurate measurements were verified in the pre-flared canals, except for the Novapex with a #20 K-file. Conclusions: The tested EALs showed acceptable accuracy, and the pre-flaring procedure had a greater effect than the file size used.
Abstract:
Purpose: Arbitrary numbers of corneal confocal microscopy images have been used for analysis of corneal subbasal nerve parameters under the implicit assumption that these are a representative sample of the central corneal nerve plexus. The purpose of this study is to present a technique for quantifying the number of random central corneal images required to achieve an acceptable level of accuracy in the measurement of corneal nerve fiber length and branch density. Methods: Every possible combination of 2 to 16 images (where 16 was deemed the true mean) of the central corneal subbasal nerve plexus, not overlapping by more than 20%, was assessed for nerve fiber length and branch density in 20 subjects with type 2 diabetes and varying degrees of functional nerve deficit. Mean ratios were calculated to allow comparisons between and within subjects. Results: In assessing nerve branch density, eight randomly chosen images not overlapping by more than 20% produced an average that was within 30% of the true mean 95% of the time. A similar sampling strategy of five images was within 13% of the true mean 80% of the time for corneal nerve fiber length. Conclusions: The "sample combination analysis" presented here can be used to determine the sample size required for a desired level of accuracy of quantification of corneal subbasal nerve parameters. This technique may have applications in other biological sampling studies.
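The "sample combination analysis" can be sketched as follows: take the mean of all available images as the true mean, enumerate every k-image combination, and report the fraction of combinations whose mean falls within a chosen tolerance of that true mean. A minimal sketch with invented readings and an illustrative 5% tolerance (the study used 30% and 13% bands; the overlap constraint is omitted here):

```python
from itertools import combinations
from statistics import mean

def combination_coverage(values, k, tolerance):
    """Fraction of all k-image combinations whose mean lies within
    `tolerance` (a proportion) of the mean of the full image set."""
    true_mean = mean(values)
    combos = list(combinations(values, k))
    hits = sum(abs(mean(c) - true_mean) <= tolerance * true_mean for c in combos)
    return hits / len(combos)

# Invented nerve fiber length readings from 16 images of one eye
readings = [14.2, 15.1, 13.8, 16.0, 14.9, 15.5, 13.2, 14.7,
            15.8, 14.4, 13.9, 15.2, 14.1, 15.6, 14.8, 15.0]
coverage = combination_coverage(readings, k=5, tolerance=0.05)
```

Sweeping k upward until the coverage reaches the desired confidence level (e.g. 0.95) gives the required number of images.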
Abstract:
A fundamental proposition is that the accuracy of the designer's tender price forecasts is positively correlated with the amount of information available for that project. The paper describes an empirical study of the effects of the quantity of information available on practicing Quantity Surveyors' forecasting accuracy. The methodology involved the surveyors repeatedly revising tender price forecasts on receipt of chunks of project information. Each of twelve surveyors undertook two projects and selected information chunks from a total of sixteen information types. The analysis indicated marked differences in accuracy between different project types and experts/non-experts. The expert surveyors' forecasts were not found to be significantly improved by information other than that of basic building type and size, even after eliminating project type effects. The expert surveyors' forecasts based on the knowledge of building type and size alone were, however, found to be of similar accuracy to that of average practitioners pricing full bills of quantities.
Abstract:
This study used automated data processing techniques to calculate a set of novel treatment plan accuracy metrics and investigate their usefulness as predictors of quality assurance (QA) success and failure. A total of 151 beams from 23 prostate and cranial IMRT treatment plans were used in this study. These plans had been evaluated before treatment using measurements with a diode array system. The TADA software suite was adapted to allow automatic batch calculation of several proposed plan accuracy metrics, including mean field area, small-aperture, off-axis and closed-leaf factors. All of these results were compared with the gamma pass rates from the QA measurements and correlations were investigated. The mean field area factor provided a threshold field size (5 cm², equivalent to a 2.2 × 2.2 cm square field), below which all beams failed the QA tests. The small-aperture score provided a useful predictor of plan failure when averaged over all beams, despite being weakly correlated with gamma pass rates for individual beams. By contrast, the closed-leaf and off-axis factors provided information about the geometric arrangement of the beam segments but were not useful for distinguishing between plans that passed and failed QA. This study has provided some simple tests for plan accuracy, which may help minimise time spent on QA assessments of treatments that are unlikely to pass.
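As an illustration of the threshold test, a beam could be screened against the 5 cm² mean-field-area cutoff as sketched below. Whether the study's metric weights segment areas by monitor units is an assumption here, and the beam data are invented:

```python
def mean_field_area(segment_areas_cm2, segment_mu):
    """Monitor-unit-weighted mean aperture area of a beam's segments (cm^2)."""
    total_mu = sum(segment_mu)
    return sum(a * mu for a, mu in zip(segment_areas_cm2, segment_mu)) / total_mu

def flag_small_field_beams(beams, threshold_cm2=5.0):
    """Names of beams whose mean field area falls below the threshold,
    below which every beam in the study failed its QA test."""
    return [name for name, areas, mus in beams
            if mean_field_area(areas, mus) < threshold_cm2]

# Invented beams: (name, per-segment aperture areas in cm^2, per-segment MU)
beams = [("beam1", [4.0, 3.5, 4.2], [10, 20, 15]),
         ("beam2", [30.0, 25.0], [50, 40])]
flagged = flag_small_field_beams(beams)  # ["beam1"]
```

A pre-treatment screen of this kind is cheap to batch-compute, which is the appeal of the metrics the study proposes.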
Abstract:
Long-term measurements of particle number size distribution (PNSD) produce a very large number of observations, and their analysis requires an efficient approach in order to produce results in the least possible time and with maximum accuracy. Clustering techniques are a family of sophisticated methods which have recently been employed to analyse PNSD data; however, very little information is available comparing the performance of different clustering techniques on PNSD data. This study aims to apply several clustering techniques (i.e. K-means, PAM, CLARA and SOM) to PNSD data, in order to identify and apply the optimum technique to PNSD data measured at 25 sites across Brisbane, Australia. A new method, based on the Generalised Additive Model (GAM) with a basis of penalised B-splines, was proposed to parameterise the PNSD data, and the temporal weight of each cluster was also estimated using the GAM. In addition, each cluster was associated with its possible source based on the results of this parameterisation, together with the characteristics of each cluster. The performances of the four clustering techniques were compared using the Dunn index and silhouette width validation values, and the K-means technique was found to have the highest performance, with five clusters being the optimum. Therefore, five clusters were identified within the data using the K-means technique. The diurnal occurrence of each cluster was used together with other air quality parameters, temporal trends and the physical properties of each cluster in order to attribute each cluster to its source and origin. The five clusters were attributed to three major sources and origins: regional background particles, photochemically induced nucleated particles and vehicle-generated particles. Overall, clustering was found to be an effective technique for attributing each particle size spectrum to its source, and the GAM was suitable for parameterising the PNSD data. These two techniques can help researchers immensely in analysing PNSD data for characterisation and source apportionment purposes.
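The cluster-count selection described above (K-means scored by silhouette width) can be sketched in a self-contained form. The two-dimensional "spectra" below stand in for full particle number size distributions, and the deterministic farthest-point initialisation is an implementation choice, not the study's:

```python
import math

def _dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def _centroid(group):
    return [sum(v) / len(group) for v in zip(*group)]

def kmeans(points, k, iters=50):
    """Plain k-means with farthest-point initialisation; returns labels, centres."""
    centres = [points[0]]
    while len(centres) < k:
        centres.append(max(points, key=lambda p: min(_dist(p, c) for c in centres)))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: _dist(p, centres[i]))].append(p)
        centres = [_centroid(g) if g else centres[i] for i, g in enumerate(groups)]
    labels = [min(range(k), key=lambda i: _dist(p, centres[i])) for p in points]
    return labels, centres

def silhouette(points, labels):
    """Mean silhouette width: (b - a) / max(a, b) averaged over all points."""
    present = set(labels)
    if len(present) < 2:
        return -1.0
    scores = []
    for i, p in enumerate(points):
        same = [q for j, q in enumerate(points) if labels[j] == labels[i] and j != i]
        if not same:
            continue  # singleton clusters carry no silhouette score here
        a = sum(_dist(p, q) for q in same) / len(same)
        b = min(sum(_dist(p, q) for j, q in enumerate(points) if labels[j] == c)
                / labels.count(c)
                for c in present if c != labels[i])
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two well-separated synthetic "size spectra" modes
spectra = ([[1 + 0.1 * i, 1 - 0.1 * i] for i in range(5)]
           + [[10 + 0.1 * i, 10 - 0.1 * i] for i in range(5)])

# Pick the cluster count with the highest silhouette width
best_k = max(range(2, 5), key=lambda k: silhouette(spectra, kmeans(spectra, k)[0]))
```

The same scoring loop extends to PAM, CLARA or SOM labelings, which is how the four techniques can be compared on equal footing.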
Abstract:
Although subsampling is a common method for describing the composition of large and diverse trawl catches, the accuracy of these techniques is often unknown. We determined the sampling errors generated from estimating the percentage of the total number of species recorded in catches, as well as the abundance of each species, at each increase in the proportion of the sorted catch. We completely partitioned twenty prawn trawl catches from tropical northern Australia into subsamples of about 10 kg each. All subsamples were then sorted, and species numbers recorded. Catch weights ranged from 71 to 445 kg, and the number of fish species in trawls ranged from 60 to 138, and invertebrate species from 18 to 63. Almost 70% of the species recorded in catches were "rare" in subsamples (less than one individual per 10 kg subsample or less than one in every 389 individuals). A matrix was used to show the increase in the total number of species that were recorded in each catch as the percentage of the sorted catch increased. Simulation modelling showed that sorting small subsamples (about 10% of catch weights) identified about 50% of the total number of species caught in a trawl. Larger subsamples (50% of catch weight on average) identified about 80% of the total species caught in a trawl. The accuracy of estimating the abundance of each species also increased with increasing subsample size. For the "rare" species, sampling error was around 80% after sorting 10% of catch weight and was just less than 50% after 40% of catch weight had been sorted. For the "abundant" species (five or more individuals per 10 kg subsample or five or more in every 389 individuals), sampling error was around 25% after sorting 10% of catch weight, but was reduced to around 10% after 40% of catch weight had been sorted.
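The simulation idea, estimating how the share of species recovered grows with subsample size, can be sketched with an invented catch (five abundant species plus 60 singleton "rare" species; the real catches were partitioned into ~10 kg physical subsamples rather than sampled per individual):

```python
import random

def species_recovered(catch, fraction, seed=0):
    """Sort a random `fraction` of the catch; return the share of the full
    catch's species that appear in the sorted subsample."""
    rng = random.Random(seed)
    subsample = rng.sample(catch, round(len(catch) * fraction))
    return len(set(subsample)) / len(set(catch))

# Invented catch: five abundant species (200 individuals each) plus 60
# singleton "rare" species, mimicking the dominance structure described above
catch = ([f"sp{i}" for i in range(5) for _ in range(200)]
         + [f"rare{i}" for i in range(60)])
small = species_recovered(catch, 0.10)  # sort ~10% of the catch
large = species_recovered(catch, 0.50)  # sort ~50% of the catch
```

Because rare species dominate the species list but not the individuals, the recovered share climbs slowly with subsample size, which is the pattern the trawl study quantifies.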
Abstract:
In order to predict the current state and future development of Earth's climate, detailed information on atmospheric aerosols and aerosol-cloud interactions is required. Furthermore, these interactions need to be expressed in such a way that they can be represented in large-scale climate models. The largest uncertainties in the estimate of radiative forcing on the present-day climate are related to the direct and indirect effects of aerosols. In this work aerosol properties were studied at Pallas and Utö in Finland, and at Mount Waliguan in western China. Approximately two years of data from each site were analyzed. In addition, data from two intensive measurement campaigns at Pallas were used. The measurements at Mount Waliguan were the first long-term aerosol particle number concentration and size distribution measurements conducted in this region. They revealed that the number concentrations of aerosol particles at Mount Waliguan were much higher than those measured at similar altitudes in other parts of the world. The particles were concentrated in the Aitken size range, indicating that they were produced within a couple of days prior to reaching the site, rather than being transported over thousands of kilometers. Aerosol partitioning between cloud droplets and cloud interstitial particles was studied at Pallas during the two measurement campaigns, the First Pallas Cloud Experiment (First PaCE) and the Second Pallas Cloud Experiment (Second PaCE). The method of using two differential mobility particle sizers (DMPS) to calculate the number concentration of activated particles was found to agree well with direct measurements of cloud droplets. Several parameters important in cloud droplet activation were found to depend strongly on the air mass history. The effects of these parameters partially cancelled each other out. The aerosol number-to-volume concentration ratio was studied at all three sites using data sets with long time-series. The ratio was found to vary more than in earlier studies, but less than either aerosol particle number concentration or volume concentration alone. Both an air mass dependency and a seasonal pattern were found at Pallas and Utö, but only a seasonal pattern at Mount Waliguan. The number-to-volume concentration ratio was found to follow the seasonal temperature pattern well at all three sites. A new parameterization for partitioning between cloud droplets and cloud interstitial particles was developed. The parameterization uses the aerosol particle number-to-volume concentration ratio and the aerosol particle volume concentration as the only information on the aerosol number and size distribution. The new parameterization is computationally more efficient than the more detailed parameterizations currently in use, but its accuracy was slightly lower. The new parameterization was also compared to directly observed cloud droplet number concentration data, and a good agreement was found.
Abstract:
We have compared the total and fine-mode aerosol optical depths (tau and tau(fine)) retrieved by the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard Terra and Aqua (2001-2005) with the equivalent parameters derived by the Aerosol Robotic Network (AERONET) at Kanpur (26.45 degrees N, 80.35 degrees E), northern India. MODIS Collection 005 (C005)-derived tau(0.55) was found to be in good agreement with the AERONET measurements. The tau(fine) and eta (tau(fine)/tau) were, however, significantly biased low in most matched cases. A new retrieval using an absorbing aerosol model (SSA of about 0.87) with increased visible surface reflectance provided improved tau and tau(fine) at Kanpur. The new derivation of eta also compares well qualitatively with an independent set of in situ measurements of accumulation-mode mass fraction over much of southern India. This suggests that although the MODIS land algorithm has limited information for deriving the size properties of aerosols over land, more accurate parameterization of aerosol and surface properties within the existing C005 algorithm may improve the accuracy of size-resolved aerosol optical properties. The results presented in this paper indicate a need to reconsider the surface parameterization and assumed aerosol properties in the MODIS C005 algorithm over the Indian region in order to retrieve more accurate aerosol optical and size properties, which are essential to quantify the impact of human-made aerosols on climate.
Abstract:
In this paper, the size-dependent linear free flexural vibration behavior of functionally graded (FG) nanoplates is investigated using the isogeometric finite element method. The field variables are approximated by non-uniform rational B-splines. The nonlocal constitutive relation is based on Eringen's differential form of nonlocal elasticity theory. The material properties are assumed to vary only in the thickness direction, and the effective properties for the FG plate are computed using the Mori-Tanaka homogenization scheme. The accuracy of the present formulation is demonstrated on problems for which solutions are available. A detailed numerical study is carried out to examine the effects of the material gradient index, the characteristic internal length, the plate thickness, the plate aspect ratio and the boundary conditions on the global response of the FG nanoplate. From this detailed numerical study, it is seen that the fundamental frequency decreases with increasing gradient index and characteristic internal length.