909 results for Data accuracy
Abstract:
Purpose: The aim of this study was to evaluate the influence of artificial accelerated aging on the dimensional stability of two types of acrylic resin (thermally and chemically activated) submitted to different storage protocols. Materials and Methods: One hundred specimens were made using a Teflon matrix (1.5 cm x 0.5 mm) with four imprint marks, following the lost-wax casting method. The specimens were divided into ten groups, according to the type of acrylic resin, aging procedure, and storage protocol (30 days). GI: acrylic resin thermally activated, aging, storage in artificial saliva for 16 hours, distilled water for 8 hours; GII: thermal, aging, artificial saliva for 16 hours, dry for 8 hours; GIII: thermal, no aging, artificial saliva for 16 hours, distilled water for 8 hours; GIV: thermal, no aging, artificial saliva for 16 hours, dry for 8 hours; GV: acrylic resin chemically activated, aging, artificial saliva for 16 hours, distilled water for 8 hours; GVI: chemical, aging, artificial saliva for 16 hours, dry for 8 hours; GVII: chemical, no aging, artificial saliva for 16 hours, distilled water for 8 hours; GVIII: chemical, no aging, artificial saliva for 16 hours, dry for 8 hours; GIX: thermal, dry for 24 hours; and GX: chemical, dry for 24 hours. All specimens were photographed before and after treatment, and the images were evaluated with software (UTHSCSA-Image Tool) that measured the distances between the marks on the specimens (mm), from which dimensional stability was calculated. Data were submitted to statistical analysis (two-way ANOVA, Tukey test, p = 0.05). Results: Statistical analysis showed that the specimens stored in water presented the largest distances along both axes (major and minor), statistically different (p < 0.05) from the control groups. Conclusions: All acrylic resins presented dimensional changes, and artificial accelerated aging and the storage period influenced these alterations.
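The dimensional-stability calculation described above (inter-mark distances measured before and after treatment) reduces to a percent-change computation. A minimal sketch, with illustrative mark distances that are not from the study:

```python
# Hypothetical sketch of the dimensional-stability calculation: distances
# between imprint marks are measured (mm) before and after treatment, and
# the percent change along each axis is the dimensional alteration.
# The sample values below are illustrative, not the study's data.

def dimensional_change(before_mm: float, after_mm: float) -> float:
    """Percent change in inter-mark distance (positive = expansion)."""
    return (after_mm - before_mm) / before_mm * 100.0

# Example: major-axis distance grows from 10.00 mm to 10.12 mm after storage,
# while the minor-axis distance shrinks slightly.
major = dimensional_change(10.00, 10.12)   # +1.2 %
minor = dimensional_change(5.00, 4.97)     # -0.6 %
```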
Abstract:
In order to understand the determinants of schistosome-related hepato- and spleno-megaly better, 14 002 subjects aged 3-60 years (59% male; mean age = 32 years) were randomly selected from 43 villages, all in Hunan province, China, where schistosomiasis caused by Schistosoma japonicum is endemic. The abdomen of each subject was examined along the mid-sternal (MSL) and mid-clavicular lines, for evidence of current hepato- and/or spleno-megaly, and a questionnaire was used to collect information on the medical history of each individual. Current infections with S. japonicum were detected by stool examination. Almost all (99.8%) of the subjects were ethnically Han by descent and most (77%) were engaged in farming. Although schistosomiasis appeared common (42% of the subjects claiming to have had the disease), only 45% of the subjects said they had received anti-schistosomiasis drugs. Overall, 1982 (14%) of the subjects had S. japonicum infections (as revealed by miracidium-hatching tests and/or Kato-Katz smears) when examined and 22% had palpable hepatomegaly (i.e. enlargement of at least 3 cm along the MSL), although only 2.5% had any form of detectable splenomegaly (i.e. a Hackett's grade of at least 1). Multiple logistic regression revealed that male subjects, fishermen, farmers, subjects aged greater than or equal to 25 years, subjects with a history of schistosomiasis, and subjects who had had bloody stools in the previous 2 weeks were all at relatively high risk of hepato- and/or spleno-megaly. In areas moderately endemic for Schistosoma japonicum, occupational exposure and disease history appear to be good predictors of current disease status among older residents. These results reconfirm those reported earlier in the same region.
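The risk-factor analysis above uses multiple logistic regression. The following is a minimal pure-Python sketch of that technique on toy data; the predictors, coefficients, and observations are illustrative assumptions, not the study's dataset:

```python
# Minimal sketch of multiple logistic regression by gradient ascent.
# Toy data and feature names (male, farmer) are illustrative assumptions;
# in practice one would use a statistics package rather than this loop.
import math

def fit_logistic(X, y, lr=0.1, n_iter=5000):
    """Fit logistic-regression weights (w[0] is the intercept)."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(n_iter):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))      # predicted probability
            err = yi - p
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj + lr * g / len(y) for wj, g in zip(w, grad)]
    return w

# Toy predictors: [male, farmer]; outcome: hepatomegaly (1 = present).
X = [[1, 1], [1, 0], [0, 1], [0, 0], [1, 1], [0, 0]]
y = [1, 1, 1, 0, 1, 0]
weights = fit_logistic(X, y)
odds_ratio_male = math.exp(weights[1])  # >1 suggests elevated risk for males
```

Exponentiating a fitted coefficient gives the odds ratio for that risk factor, which is how "relatively high risk" groups are typically identified.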
Abstract:
The Eysenck Personality Questionnaire-Revised (EPQ-R), the Eysenck Personality Profiler Short Version (EPP-S), and the Big Five Inventory (BFI-V4a) were administered to 135 postgraduate students of business in Pakistan. Whilst the Extraversion and Neuroticism scales from the three questionnaires were highly correlated, it was found that Agreeableness was most highly correlated with Psychoticism in the EPQ-R and Conscientiousness was most highly correlated with Psychoticism in the EPP-S. Principal component analyses with varimax rotation were carried out. The analyses generally suggested that the five-factor model rather than the three-factor model was more robust and better for interpretation of all the higher-order scales of the EPQ-R, EPP-S, and BFI-V4a in the Pakistani data. Results show that the superiority of the five-factor solution results from the inclusion of a broader variety of personality scales in the input data, whereas Eysenck's three-factor solution seems to be best when a less complete but possibly more important set of variables is input. (C) 2001 Elsevier Science Ltd. All rights reserved.
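The scale-correlation step above (e.g. Extraversion on the EPQ-R versus Extraversion on the BFI) is a Pearson correlation between the two score vectors. A small sketch with toy scores that are assumptions, not the study's data:

```python
# Sketch of correlating corresponding trait scales from two questionnaires.
# The respondent scores below are illustrative assumptions.
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two score vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy Extraversion scores for the same six respondents on two instruments.
epqr_e = [12, 18, 9, 21, 15, 7]
bfi_e = [30, 41, 25, 45, 36, 22]
r = pearson_r(epqr_e, bfi_e)   # near 1 when the two scales agree
```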
Abstract:
Purpose: For ultra-endurance athletes, whose energy expenditure is likely to be at the extremes of human tolerance for sustained periods of time, there is increased concern regarding meeting energy needs. Due to the lack of data outlining the energy requirements of such athletes, it is possible that those participating in ultra-endurance exercise are compromising performance, as well as health, as a result of inadequate nutrition and energy intake. To provide insight into this dilemma, we present a case study of a 37-yr-old ultra-marathon runner as he runs around the coast of Australia. Methods: Total energy expenditure was measured over a 2-wk period using the doubly labeled water technique. Results: The average total energy expenditure of the case subject was 6321 kcal·d⁻¹. Based on the expected accuracy and precision of the doubly labeled water technique, the subject's total energy expenditure might range between 6095 and 6550 kcal·d⁻¹. The subject's average daily water turnover was 6.083 L over the 14-d period and might range between 5.9 and 6.3 L·d⁻¹. Conclusions: This information will provide a guide to the energy requirements of ultra-endurance running and enable athletes, nutritionists, and coaches to optimize performance without compromising the health of the participant.
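The quoted range around the point estimate is a symmetric precision band. As a sketch, a fractional precision of about 3.6% reproduces the quoted bounds; that fraction is an assumption chosen for illustration, not a value stated in the abstract:

```python
# Sketch of a measurement-precision band around a point estimate of total
# energy expenditure. The 3.6 % precision fraction is an illustrative
# assumption, not a figure from the abstract.
def precision_band(estimate: float, precision_fraction: float):
    """Return (low, high) bounds for a symmetric fractional precision."""
    half_width = estimate * precision_fraction
    return estimate - half_width, estimate + half_width

low, high = precision_band(6321, 0.036)   # roughly (6093, 6549) kcal/day
```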
Abstract:
Seven hundred and nineteen samples from throughout the Cainozoic section in CRP-3 were analysed with a Malvern Mastersizer laser particle analyser, in order to derive a stratigraphic distribution of grain-size parameters downhole. Entropy analysis of these data (using the method of Woolfe and Michibayashi, 1995) allowed recognition of four groups of samples, each group characterised by a distinctive grain-size distribution. Group 1, which shows a multi-modal distribution, corresponds to mudrocks, interbedded mudrock/sandstone facies, muddy sandstones and diamictites. Group 2, with a sand-grade mode but wide dispersion of particle size, corresponds to muddy sandstones, a few cleaner sandstones and some conglomerates. Groups 3 and 4 are also sand-dominated, with better grain-size sorting, and correspond to clean, well-washed sandstones of varying mean grain-size (medium and fine modes, respectively). The downhole disappearance of Group 1, and the dominance of Groups 3 and 4, reflect a concomitant change from mudrock- and diamictite-rich lithology to a section dominated by clean, well-washed sandstones with minor conglomerates. Progressive downhole increases in percentage sand and principal mode also reflect these changes. Significant shifts in grain-size parameters and entropy group membership were noted across sequence boundaries and seismic reflectors, as recognised in other studies.
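The entropy underlying this kind of grouping is the Shannon entropy of each sample's grain-size frequency distribution: low for well-sorted, single-mode samples and high for multi-modal ones. A minimal sketch with invented bin counts (the exact entropy formulation of Woolfe and Michibayashi may differ in detail):

```python
# Sketch of Shannon entropy over grain-size bins. Bin counts below are
# illustrative assumptions, not CRP-3 measurements.
import math

def shannon_entropy(freqs):
    """Entropy of a frequency distribution over grain-size bins (nats)."""
    total = sum(freqs)
    probs = [f / total for f in freqs if f > 0]
    return -sum(p * math.log(p) for p in probs)

well_sorted = [0, 2, 90, 6, 2]      # clean sandstone: one dominant bin
multi_modal = [20, 20, 20, 20, 20]  # diamictite-like: spread across bins
# A well-sorted sample has much lower entropy than a multi-modal one.
```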
Abstract:
Bond's method for ball mill scale-up only gives the mill power draw for a given duty. This method is incompatible with computer modelling and simulation techniques. It might not be applicable for the design of fine grinding ball mills and ball mills preceded by autogenous and semi-autogenous grinding mills. Model-based ball mill scale-up methods have not been validated using a wide range of full-scale circuit data, so their accuracy is questionable. Some of these methods also require expensive pilot testing. A new ball mill scale-up procedure is developed which does not have these limitations. This procedure uses data from two laboratory tests to determine the parameters of a ball mill model. A set of scale-up criteria then scales up these parameters. The procedure uses the scaled-up parameters to simulate the steady-state performance of full-scale mill circuits. At the end of the simulation, the scale-up procedure gives the size distribution, the volumetric flowrate and the mass flowrate of all the streams in the circuit, and the mill power draw.
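The core step, scaling a lab-derived model parameter to full-mill geometry, can be sketched as follows. The power-law form and the exponent here are hypothetical stand-ins for illustration; the abstract does not specify the actual scale-up criteria:

```python
# Hypothetical sketch of a scale-up criterion: a lab breakage-rate parameter
# is scaled by a mill-diameter ratio raised to an exponent. Both the
# functional form and the exponent are illustrative assumptions.
def scale_up_rate(lab_rate: float, lab_diameter: float,
                  full_diameter: float, exponent: float = 0.5) -> float:
    """Scale a lab-mill model parameter to a full-scale mill diameter."""
    return lab_rate * (full_diameter / lab_diameter) ** exponent

full_rate = scale_up_rate(lab_rate=0.8, lab_diameter=0.3, full_diameter=4.8)
# (4.8 / 0.3) ** 0.5 = 4, so full_rate = 3.2
```

The scaled parameters would then feed a steady-state circuit simulation, which reports stream size distributions, flowrates, and power draw as described above.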
Abstract:
Using NONMEM, the population pharmacokinetics of perhexiline were studied in 88 patients (34 F, 54 M) who were being treated for refractory angina. Their mean +/- SD (range) age was 75 +/- 9.9 years (46-92), and the length of perhexiline treatment was 56 +/- 77 weeks (0.3-416). The sampling time after a dose was 14.1 +/- 21.4 hours (0.5-200), and the perhexiline plasma concentrations were 0.39 +/- 0.32 mg/L (0.03-1.56). A one-compartment model with first-order absorption was fitted to the data using the first-order (FO) approximation. The best model contained 2 subpopulations (obtained via the $MIXTURE subroutine) of 77 subjects (subgroup A) and 11 subjects (subgroup B) that had typical values for clearance (CL/F) of 21.8 L/h and 2.06 L/h, respectively. The volumes of distribution (V/F) were 1470 L and 260 L, respectively, which suggested a reduction in presystemic metabolism in subgroup B. The interindividual variability (CV%) was modeled logarithmically and for CL/F ranged from 69.1% (subgroup A) to 86.3% (subgroup B). The interindividual variability in V/F was 111%. The residual variability unexplained by the population model was 28.2%. These results confirm and extend the existing pharmacokinetic data on perhexiline, especially the bimodal distribution of CL/F manifested via an inherited deficiency in hepatic and extrahepatic CYP2D6 activity.
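The one-compartment model with first-order absorption fitted above has a standard closed-form concentration curve. A sketch evaluating it, loosely using subgroup A's typical values (CL/F = 21.8 L/h, V/F = 1470 L); the dose and absorption rate constant ka are assumptions, since the abstract does not report them:

```python
# Closed-form one-compartment, first-order-absorption concentration curve.
# CL/F and V/F are taken from subgroup A above; dose and ka are
# illustrative assumptions.
import math

def conc_1cpt_oral(dose_mg, ka, cl_over_f, v_over_f, t_h):
    """Plasma concentration (mg/L) at time t_h after a single oral dose."""
    ke = cl_over_f / v_over_f                     # elimination rate constant
    coeff = dose_mg * ka / (v_over_f * (ka - ke))
    return coeff * (math.exp(-ke * t_h) - math.exp(-ka * t_h))

c = conc_1cpt_oral(dose_mg=100, ka=1.0, cl_over_f=21.8, v_over_f=1470, t_h=14)
```

The large V/F and small CL/F of subgroup B would shift this curve toward higher, longer-lasting concentrations, consistent with reduced CYP2D6-mediated metabolism.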
Abstract:
When the data consist of certain attributes measured on the same set of items in different situations, they can be described as a three-mode three-way array. A mixture likelihood approach can be implemented to cluster the items (i.e., one of the modes) on the basis of both of the other modes simultaneously (i.e., the attributes measured in different situations). In this paper, it is shown that this approach can be extended to handle three-mode three-way arrays where some of the data values are missing at random in the sense of Little and Rubin (1987). The methodology is illustrated by clustering the genotypes in a three-way soybean data set where various attributes were measured on genotypes grown in several environments.
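The data layout can be made concrete with a small sketch: the three-way array is unfolded to items x (attributes*situations), and missing entries are skipped when comparing items. The toy nearest-profile assignment below is a simplified stand-in for the paper's mixture-likelihood (EM) clustering, and the data are invented:

```python
# Simplified sketch of clustering items from a three-mode three-way array
# (items x attributes x situations) with values missing at random (None).
# Nearest-profile assignment stands in for the actual mixture-likelihood
# approach; data, profiles, and sizes are illustrative assumptions.
def unfold(three_way):
    """items x attributes x situations -> items x (attributes*situations)."""
    return [[v for attr in item for v in attr] for item in three_way]

def dist_sq(a, b):
    """Mean squared distance over entries observed (non-None) in both."""
    pairs = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
    return sum((x - y) ** 2 for x, y in pairs) / max(len(pairs), 1)

# Toy soybean-like data: 4 genotypes, 2 attributes, 2 environments;
# one value is missing at random.
data = [
    [[1.0, 1.1], [0.9, 1.0]],
    [[1.1, None], [1.0, 1.1]],
    [[5.0, 5.2], [4.9, 5.1]],
    [[5.1, 5.0], [5.0, 5.2]],
]
rows = unfold(data)
centres = [[1.0] * 4, [5.0] * 4]   # two reference cluster profiles
labels = [min(range(2), key=lambda k: dist_sq(r, centres[k])) for r in rows]
# Low-valued and high-valued genotypes separate, despite the missing value.
```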
Abstract:
Regional planners, policy makers and policing agencies all recognize the importance of better understanding the dynamics of crime. Theoretical and application-oriented approaches which provide insights into why and where crimes take place are much sought after. Geographic information systems and spatial analysis techniques, in particular, are proving to be essential for studying criminal activity. However, the capabilities of these quantitative methods continue to evolve. This paper explores the use of geographic information systems and spatial analysis approaches for examining crime occurrence in Brisbane, Australia. The analysis highlights novel capabilities for the analysis of crime in urban regions.
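One common spatial-analysis step in such GIS work is a kernel density estimate over incident locations to surface hotspots. A minimal sketch; the coordinates and bandwidth are toy assumptions, not Brisbane data, and the abstract does not state which techniques the paper uses:

```python
# Sketch of a Gaussian kernel density estimate over crime incident points.
# Coordinates and bandwidth are illustrative assumptions.
import math

def kde_at(point, incidents, bandwidth=1.0):
    """Gaussian kernel density at `point` given incident (x, y) locations."""
    px, py = point
    total = 0.0
    for x, y in incidents:
        d2 = (x - px) ** 2 + (y - py) ** 2
        total += math.exp(-d2 / (2 * bandwidth ** 2))
    return total / (2 * math.pi * bandwidth ** 2 * len(incidents))

incidents = [(0, 0), (0.2, 0.1), (0.1, -0.2), (5, 5)]
hot = kde_at((0, 0), incidents)     # inside the cluster: high density
cold = kde_at((10, 10), incidents)  # far from all incidents: near zero
```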
Abstract:
Objectives: (1) To establish test performance measures for Transient Evoked Otoacoustic Emission testing of 6-year-old children in a school setting; (2) To investigate whether Transient Evoked Otoacoustic Emission testing provides a more accurate and effective alternative to a pure tone screening plus tympanometry protocol. Methods: Pure tone screening, tympanometry and transient evoked otoacoustic emission data were collected from 940 subjects (1880 ears), with a mean age of 6.2 years. Subjects were tested in non-sound-treated rooms within 22 schools. Receiver operating characteristic (ROC) curves, along with specificity, sensitivity, accuracy and efficiency values, were determined for a variety of transient evoked otoacoustic emission/pure tone screening/tympanometry comparisons. Results: The Transient Evoked Otoacoustic Emission failure rate for the group was 20.3%. The failure rate for pure tone screening was found to be 8.9%, whilst 18.6% of subjects failed a protocol consisting of combined pure tone screening and tympanometry results. In essence, findings from the comparison of overall Transient Evoked Otoacoustic Emission pass/fail with overall pure tone screening pass/fail suggested that use of a modified Rhode Island Hearing Assessment Project criterion would result in a very high probability that a child with a pass result has normal hearing (true negative). However, the hit rate was only moderate. Selection of a signal-to-noise ratio (SNR) criterion set at greater than or equal to 1 dB appeared to provide the best test performance measures for the range of SNR values investigated. Test performance measures generally declined when tympanometry results were included, with the exception of lower false alarm rates and higher positive predictive values. The exclusion of low frequency data from the Transient Evoked Otoacoustic Emission SNR versus pure tone screening analysis resulted in improved performance measures.
Conclusions: The present study poses several implications for the clinical implementation of Transient Evoked Otoacoustic Emission screening for entry level school children. Transient Evoked Otoacoustic Emission pass/fail criteria will require revision. The findings of the current investigation offer support to the possible replacement of pure tone screening with Transient Evoked Otoacoustic Emission testing for 6-year-old children. However, they do not suggest the replacement of the pure tone screening plus tympanometry battery. (C) 2001 Elsevier Science Ireland Ltd. All rights reserved.
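The performance measures discussed above (sensitivity or hit rate, specificity, accuracy, positive predictive value) all derive from a 2x2 comparison of screening result against reference outcome. A sketch with invented counts, not the study's data:

```python
# Sketch of screening-test performance measures from a 2x2 table comparing
# a screen (e.g. TEOAE pass/fail) against a reference (e.g. pure tone
# screening pass/fail). Counts are illustrative assumptions.
def performance(tp, fp, fn, tn):
    """Standard measures from true/false positive/negative counts."""
    return {
        "sensitivity": tp / (tp + fn),            # hit rate
        "specificity": tn / (tn + fp),            # true-negative rate
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "ppv": tp / (tp + fp),                    # positive predictive value
    }

m = performance(tp=120, fp=80, fn=40, tn=1640)
# sensitivity 0.75; specificity ~0.953; accuracy ~0.936; ppv 0.60
```

Sweeping the SNR criterion and recomputing these measures at each cutoff is what traces out the ROC curves mentioned in the Methods.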