32 results for "Forecasting accuracy" in the Biblioteca Digital da Produção
Abstract:
Several impression materials are available in the Brazilian marketplace for use in oral rehabilitation. The aim of this study was to compare the accuracy of different impression materials used for fixed partial dentures following the manufacturers' instructions. A master model representing a partially edentulous mandibular right hemi-arch segment, whose teeth were prepared to receive full crowns, was used. Custom trays were prepared with auto-polymerizing acrylic resin and impressions were performed with a dental surveyor, standardizing the path of insertion and removal of the tray. Alginate and elastomeric materials were used, and stone casts were obtained after the impressions. For the silicones, impression techniques were also compared. To determine the impression materials' accuracy, digital photographs of the master model and of the stone casts were taken and the discrepancies between them were measured. The data were subjected to analysis of variance and Duncan's complementary test. Polyether and addition silicone following the single-phase technique were statistically different from alginate, condensation silicone and addition silicone following the double-mix technique (p < .05), presenting smaller discrepancies. However, condensation silicone was similar (p > .05) to alginate and addition silicone following the double-mix technique, but different from polysulfide. The results led to the conclusion that different impression materials and techniques influenced the accuracy of the stone casts, such that polyether, polysulfide and addition silicone following the single-phase technique were more accurate than the other materials.
Abstract:
The present study compared the accuracy of three electronic apex locators (EALs) - Elements Diagnostic®, Root ZX® and Apex DSP® - in the presence of different irrigating solutions (0.9% saline solution and 1% sodium hypochlorite). The electronic measurements were carried out by three examiners, using twenty extracted human permanent maxillary central incisors. A size 10 K file was introduced into the root canals until reaching the 0.0 mark, and was subsequently retracted to the 1.0 mark. The gold standard (GS) measurement was obtained by combining visual and radiographic methods, and was set 1 mm short of the apical foramen. Electronic length values within ± 0.5 mm of the GS were considered accurate measurements. Intraclass correlation coefficients (ICCs) were used to verify inter-examiner agreement. The comparison among the EALs was performed using the McNemar and Kruskal-Wallis tests (p < 0.05). The ICCs were generally high, ranging from 0.8859 to 0.9657. Similar percentages of electronic measurements close to the GS were obtained with the Elements Diagnostic® and Root ZX® EALs (p > 0.05), independent of the irrigating solution used. The measurements taken with these two EALs were more accurate than those taken with Apex DSP®, regardless of the irrigating solution used (p < 0.05). It was concluded that the Elements Diagnostic® and Root ZX® apex locators are able to locate the cementum-dentine junction more precisely than Apex DSP®. The presence of irrigating solutions does not interfere with the performance of the EALs.
Abstract:
We explored possible effects of negative covariation among finger forces in multifinger accurate force production tasks on the classical Fitts's speed-accuracy trade-off. Healthy subjects performed cyclic force changes between pairs of targets "as quickly and accurately as possible." Tasks with two force amplitudes and six ratios of force amplitude to target size were performed by each of the four fingers of the right hand and four finger combinations. There was a close to linear relation between movement time and the log-transformed ratio of target amplitude to target size across all finger combinations. There was a close to linear relation between standard deviation of force amplitude and movement time. There were no differences between the performance of either of the two "radial" fingers (index and middle) and the multifinger tasks. The "ulnar" fingers (little and ring) showed higher indices of variability and longer movement times as compared with both "radial" fingers and multifinger combinations. We conclude that potential effects of the negative covariation, and of task-sharing across a set of fingers, are counterbalanced by an increase in individual finger force variability in multifinger tasks as compared with single-finger tasks. The results speak in favor of a feed-forward model of multifinger synergies. They corroborate the hypothesis that multifinger synergies are created not to improve overall accuracy, but to allow the system greater flexibility, for example to deal with unexpected perturbations and concomitant tasks.
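For reference, the near-linear relation between movement time and the log-transformed amplitude-to-target-size ratio described above is the form predicted by Fitts's law. In one standard formulation (the exact index of difficulty used in the study may differ),

\[
MT = a + b \,\log_{2}\!\left(\frac{2A}{W}\right),
\]

where MT is the movement (force-change) time, A the required amplitude, W the target size, and a and b empirically fitted constants.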
Abstract:
Background: Genome-wide association studies (GWAS) are becoming the approach of choice to identify genetic determinants of complex phenotypes and common diseases. The astonishing amount of generated data and the use of distinct genotyping platforms with variable genomic coverage are still analytical challenges. Imputation algorithms combine information from directly genotyped markers with the haplotypic structure of the population of interest to infer poorly genotyped or missing markers, and are considered a near-zero-cost approach to allow the comparison and combination of data generated in different studies. Several reports have stated that imputed markers have an overall acceptable accuracy, but no published report has performed a pairwise comparison of imputed and empirical association statistics for a complete set of GWAS markers. Results: In this report we identified a total of 73 imputed markers that yielded a nominally statistically significant association at P < 10⁻⁵ for type 2 diabetes mellitus and compared them with results obtained based on empirical allelic frequencies. Interestingly, despite their overall high correlation, association statistics based on imputed frequencies were discordant in 35 of the 73 (47%) associated markers, considerably inflating the type I error rate of imputed markers. We comprehensively tested several quality thresholds, the haplotypic structure underlying imputed markers and the use of flanking markers as predictors of inaccurate association statistics derived from imputed markers. Conclusions: Our results suggest that association statistics from imputed markers in specific MAF (minor allele frequency) ranges, located in weak linkage disequilibrium blocks, or strongly deviating from local patterns of association are prone to inflated false positive association signals. The present study highlights the potential of imputation procedures and proposes simple procedures for selecting the best imputed markers for follow-up genotyping studies.
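As a purely illustrative companion to the imputed-versus-empirical comparison above, the sketch below contrasts a simple allelic chi-square association test computed from observed allele counts with the same test computed from imputation-style allele dosages; the function names and all numbers are hypothetical and do not come from the study.

```python
# Purely illustrative sketch: an allelic chi-square association test computed
# from observed (genotyped) allele counts versus the same test computed from
# imputation-style expected allele counts (dosage sums). Synthetic numbers only.
import numpy as np
from scipy.stats import chi2_contingency

def allelic_p_value(case_alt, case_ref, ctrl_alt, ctrl_ref):
    """2x2 allelic association test (cases vs controls); returns the p-value."""
    table = np.array([[case_alt, case_ref], [ctrl_alt, ctrl_ref]], dtype=float)
    _, p, _, _ = chi2_contingency(table, correction=False)
    return p

# Directly genotyped marker: observed allele counts in cases and controls.
p_empirical = allelic_p_value(480, 520, 400, 600)

# Imputed marker: expected allele counts are the sums of per-subject dosages
# (posterior expected number of alternate alleles, between 0 and 2).
rng = np.random.default_rng(0)
case_dose = rng.uniform(0.0, 2.0, size=500)
ctrl_dose = rng.uniform(0.0, 2.0, size=500)
p_imputed = allelic_p_value(case_dose.sum(), 2 * case_dose.size - case_dose.sum(),
                            ctrl_dose.sum(), 2 * ctrl_dose.size - ctrl_dose.sum())

# Markers where the two p-values disagree strongly would be candidates for
# follow-up genotyping, as suggested in the abstract.
print(f"empirical p = {p_empirical:.2e}, imputed p = {p_imputed:.2e}")
```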
Abstract:
Single interface flow systems (SIFA) present some noteworthy advantages when compared to other flow systems, such as a simpler configuration, more straightforward operation and control, and an undemanding optimisation routine. Moreover, the straightforward establishment of the reaction zone, which relies strictly on the mutual inter-dispersion of the adjoining solutions, could be exploited to set up multiple sequential reaction schemes providing supplementary information about the species under determination. In this context, strategies for accuracy assessment could be favourably implemented. To this end, the sample could be processed by two quasi-independent analytical methods and the final result calculated after considering both methods; intrinsically more precise and accurate results would then be obtained. In order to demonstrate the feasibility of the approach, a SIFA system with spectrophotometric detection was designed for the determination of lansoprazole in pharmaceutical formulations. Two reaction interfaces with two distinct π-acceptors, chloranilic acid (CIA) and 2,3-dichloro-5,6-dicyano-p-benzoquinone (DDQ), were implemented. Linear working concentration ranges between 2.71 × 10⁻⁴ and 8.12 × 10⁻⁴ mol L⁻¹ and between 2.17 × 10⁻⁴ and 8.12 × 10⁻⁴ mol L⁻¹ were obtained for the DDQ and CIA methods, respectively. When compared with the results furnished by the reference procedure, the results showed relative deviations lower than 2.7%. Furthermore, the repeatability was good, with r.s.d. lower than 3.8% and 4.7% for the DDQ and CIA methods, respectively. The determination rate was about 30 h⁻¹.
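As a minimal, purely hypothetical sketch of the dual-method idea above (two quasi-independent determinations combined into one result), the snippet below fits two linear spectrophotometric calibrations, converts one sample signal with each, averages the two concentrations and reports their relative deviation; none of the numbers are taken from the study.

```python
# Minimal sketch (hypothetical numbers): determine an analyte with two
# quasi-independent spectrophotometric calibrations and combine the results,
# mirroring the dual-interface accuracy-assessment idea described above.
import numpy as np

def calibrate(conc, absorbance):
    """Least-squares linear calibration; returns slope and intercept."""
    slope, intercept = np.polyfit(conc, absorbance, 1)
    return slope, intercept

def predict_conc(signal, slope, intercept):
    return (signal - intercept) / slope

# Illustrative calibration data for the two methods (mol/L vs absorbance).
conc = np.array([2.5e-4, 4.0e-4, 5.5e-4, 7.0e-4, 8.0e-4])
abs_ddq = 1200 * conc + 0.01      # synthetic "DDQ" responses
abs_cla = 950 * conc + 0.02       # synthetic "CIA" responses

s1, b1 = calibrate(conc, abs_ddq)
s2, b2 = calibrate(conc, abs_cla)

sample_ddq, sample_cla = 0.61, 0.50              # sample absorbances (illustrative)
c1 = predict_conc(sample_ddq, s1, b1)
c2 = predict_conc(sample_cla, s2, b2)
final = (c1 + c2) / 2                            # combined result
rel_dev = abs(c1 - c2) / final * 100             # internal consistency check (%)
print(c1, c2, final, rel_dev)
```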
Abstract:
Background: Although the Clock Drawing Test (CDT) is the second most used test in the world for the screening of dementia, there is still debate over its sensitivity, specificity, application and interpretation in dementia diagnosis. This study has three main aims: to evaluate the sensitivity and specificity of the CDT in a sample composed of older adults with Alzheimer's disease (AD) and normal controls; to compare CDT accuracy to that of the Mini-Mental State Examination (MMSE) and the Cambridge Cognitive Examination (CAMCOG); and to test whether the combination of the MMSE with the CDT leads to accuracy higher than or comparable to that reported for the CAMCOG. Methods: Cross-sectional assessment was carried out for 121 AD patients and 99 elderly controls with heterogeneous educational levels from a geriatric outpatient clinic who completed the Cambridge Examination for Mental Disorders of the Elderly (CAMDEX). The CDT was evaluated according to the Shulman, Mendez and Sunderland scales. Results: The CDT showed high sensitivity and specificity. There were significant correlations between the CDT and the MMSE (0.700-0.730; p < 0.001) and between the CDT and the CAMCOG (0.753-0.779; p < 0.001). The combination of the CDT with the MMSE improved sensitivity and specificity (SE = 89.2-90%; SP = 71.7-79.8%). Subgroup analysis indicated that for elderly people with lower education, sensitivity and specificity were both adequate and high. Conclusions: The CDT is a robust screening test when compared with the MMSE or the CAMCOG, independent of the scale used for its interpretation. The combination with the MMSE improves its performance significantly, becoming equivalent to the CAMCOG.
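The sketch below, using entirely synthetic labels, illustrates how sensitivity and specificity are computed for a single screening test and for a simple "positive on either test" combination of two tests, the kind of pairing examined for the CDT and MMSE above.

```python
# Illustrative sketch (made-up labels): sensitivity and specificity for a single
# screening test and for a "positive on either test" combination of two tests.
import numpy as np

def sens_spec(pred, truth):
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)                 # true positives
    tn = np.sum(~pred & ~truth)               # true negatives
    return tp / truth.sum(), tn / (~truth).sum()

rng = np.random.default_rng(42)
truth = rng.random(220) < 0.55                # 1 = dementia (synthetic prevalence)
cdt_pos = truth ^ (rng.random(220) < 0.15)    # imperfect "CDT" result
mmse_pos = truth ^ (rng.random(220) < 0.20)   # imperfect "MMSE" result

print("CDT alone:          ", sens_spec(cdt_pos, truth))
print("CDT or MMSE positive:", sens_spec(cdt_pos | mmse_pos, truth))
```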
Abstract:
The ""Short Cognitive Performance Test"" (Syndrom Kurztest, SKT) is a cognitive screening battery designed to detect memory and attention deficits. The aim of this study was to evaluate the diagnostic accuracy of the SKT as a screening tool for mild cognitive impairment (MCI) and dementia. A total of 46 patients with Alzheimer`s disease (AD), 82 with MCI, and 56 healthy controls were included in the study. Patients and controls were allocated into two groups according to educational level (< 8 years or > 8 years). ROC analyses suggested that the SKT adequately discriminates AD from non-demented subjects (MCI and controls), irrespective of the education group. The test had good sensitivity to discriminate MCI from unimpaired controls in the sub-sample of individuals with more than 8 years of schooling. Our findings suggest that the SKT is a good screening test for cognitive impairment and dementia. However, test results must be interpreted with caution when administered to less-educated individuals.
Abstract:
A technique for improving the performance of an OSNR monitor based on a polarisation nulling method with the downhill simplex algorithm is demonstrated. Establishing a compromise between accuracy and acquisition time, the monitor has been calibrated to 0.72 dB/390 ms and 0.98 dB/320 ms, over a range of nearly 21 dB. As far as is known, these are the best values achieved with such an OSNR monitoring method.
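The downhill simplex step mentioned above can be sketched with a generic optimiser; the toy objective below tunes two parameters of a placeholder monitor response so that the estimated OSNR matches reference values. It is not the actual polarisation-nulling model, only an illustration of how Nelder-Mead calibration fits in.

```python
# Hypothetical sketch: use the downhill simplex (Nelder-Mead) algorithm to tune
# two monitor parameters so the estimated OSNR matches reference values.
# The response model and data below are placeholders, not the real monitor.
import numpy as np
from scipy.optimize import minimize

ref_osnr = np.linspace(10, 30, 9)                     # reference OSNR values (dB)

def estimated_osnr(params, true_osnr):
    gain, offset = params
    return gain * true_osnr + offset                  # toy monitor response

def calibration_error(params):
    return np.mean((estimated_osnr(params, ref_osnr) - ref_osnr) ** 2)

result = minimize(calibration_error, x0=[0.9, 1.0], method="Nelder-Mead")
print(result.x, result.fun)                           # tuned parameters, residual
```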
Abstract:
The literature presents a huge number of different simulations of gas-solid flows in risers applying two-fluid modeling. In spite of that, the related quantitative accuracy issue remains mostly untouched. This state of affairs seems to be mainly a consequence of modeling shortcomings, notably regarding the lack of realistic closures. In this article, predictions from a two-fluid model are compared to other published two-fluid model predictions applying the same closures, and to experimental data. A particular matter of concern is whether or not the predictions are generated inside the statistical steady state regime that characterizes riser flows. The present simulation was performed inside the statistical steady state regime. Time-averaged results are presented for different time-averaging intervals of 5, 10, 15 and 20 s inside the statistical steady state regime. The independence of the averaged results with respect to the time-averaging interval is addressed, and the results averaged over the intervals of 10 and 20 s are compared to both experiment and other two-fluid predictions. It is concluded that the two-fluid model used is still very crude and cannot provide quantitatively accurate results, at least for the particular case that was considered.
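A minimal, fully synthetic illustration of the time-averaging check described above: once the signal is statistically steady, its time average should be essentially independent of the averaging window (5, 10, 15 or 20 s here).

```python
# Minimal sketch (synthetic signal): check that time-averaged quantities become
# independent of the averaging window once the flow is statistically steady.
import numpy as np

dt = 0.01                                     # s, assumed sampling step
t = np.arange(0.0, 40.0, dt)
rng = np.random.default_rng(0)
# Synthetic "solids volume fraction" signal: steady mean plus fluctuations.
signal = 0.05 + 0.01 * np.sin(2 * np.pi * t) + 0.005 * rng.standard_normal(t.size)

t_start = 20.0                                # discard the transient before averaging
for window in (5, 10, 15, 20):
    mask = (t >= t_start) & (t < t_start + window)
    print(f"{window:2d} s average: {signal[mask].mean():.4f}")
```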
Abstract:
A thermodynamic approach to predict bulk glass-forming compositions in binary metallic systems was recently proposed. In this approach, the parameter γ* = ΔH_amor/(ΔH_inter − ΔH_amor) indicates the glass-forming ability (GFA) from the standpoint of the driving force to form different competing phases, where ΔH_amor and ΔH_inter are the enthalpies for glass and intermetallic formation, respectively. Good glass-forming compositions should have a large negative enthalpy for glass formation and a very small difference for intermetallic formation, thus making the glassy phase easily reachable even under low cooling rates. The γ* parameter showed a good correlation with GFA experimental data in the Ni-Nb binary system. In this work, a simple extension of the γ* parameter is applied to the ternary Al-Ni-Y system. The calculated γ* isocontours in the ternary diagram are compared with experimental results of glass formation in that system. Despite some misfit, the best glass formers are found quite close to the highest γ* values, leading to the conclusion that this thermodynamic approach can be extended to ternary systems, serving as a useful tool for the development of new glass-forming compositions. Finally, the thermodynamic approach is compared with the topological instability criteria used to predict the thermal behavior of glassy Al alloys.
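Written out, the glass-forming-ability parameter referred to above is

\[
\gamma^{*} = \frac{\Delta H_{\mathrm{amor}}}{\Delta H_{\mathrm{inter}} - \Delta H_{\mathrm{amor}}},
\]

where ΔH_amor and ΔH_inter are the formation enthalpies of the amorphous (glass) phase and of the competing intermetallic phases. Consistent with the abstract, a composition with a strongly negative ΔH_amor and only a small additional enthalpy gain for the intermetallics yields a large γ* and is, by this criterion, a better glass former.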
Abstract:
In this paper, a comparative analysis of the long-term electric power forecasting methodologies used in some South American countries is presented. The purpose of this study is to compare these methodologies, observe whether they have similarities, and examine the behavior of the results when they are applied to the Brazilian electric market. The power forecasts were performed for the four main consumption classes (residential, industrial, commercial and rural), which are responsible for approximately 90% of the national consumption. The tool used in this analysis was the SAS software. The outcome of this study allowed the identification of various methodological similarities, mainly those related to the econometric variables used by these methods. This fact strongly conditioned the comparative results obtained.
Abstract:
There are several ways to model a building and its heat gains from external as well as internal sources in order to evaluate proper operation, audit retrofit actions, and forecast energy consumption. Different techniques, varying from simple regression to models based on physical principles, can be used for simulation. A frequent hypothesis for all these models is that the input variables should be based on realistic data when they are available; otherwise, the evaluation of energy consumption might be highly underestimated or overestimated. In this paper, a comparison is made between a simple model based on an artificial neural network (ANN) and a model based on physical principles (EnergyPlus) as auditing and prediction tools for forecasting building energy consumption. The Administration Building of the University of Sao Paulo is used as a case study. The building energy consumption profiles were collected, as well as the campus meteorological data. Results show that both models are suitable for energy consumption forecasting. Additionally, a parametric analysis is carried out for the considered building in EnergyPlus in order to evaluate the influence of several parameters, such as the building occupation profile and weather data, on such forecasting.
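A hedged sketch of the ANN side of the comparison above: a small feed-forward network mapping synthetic weather and occupancy inputs to daily consumption. The library and network size are assumptions for illustration; the study does not specify an implementation, and the data below are invented.

```python
# Hedged sketch (synthetic data): a small feed-forward ANN that maps weather and
# occupancy inputs to daily building energy consumption, analogous in spirit to
# the ANN model compared with EnergyPlus above.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n_days = 365
temp = 18 + 10 * rng.random(n_days)                # outdoor temperature (°C)
humidity = 40 + 40 * rng.random(n_days)            # relative humidity (%)
occupancy = rng.integers(0, 500, n_days)           # occupants per day
X = np.column_stack([temp, humidity, occupancy])
# Synthetic consumption: base load + cooling load + occupant load + noise (kWh).
y = 500 + 30 * np.maximum(temp - 22, 0) + 0.8 * occupancy + rng.normal(0, 20, n_days)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                                 random_state=0))
ann.fit(X_tr, y_tr)
print("R^2 on held-out days:", ann.score(X_te, y_te))
```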
Abstract:
Accurate price forecasting for agricultural commodities can have significant decision-making implications for suppliers, especially those of biofuels, where the agriculture and energy sectors intersect. Environmental pressures and high oil prices affect demand for biofuels and have reignited the discussion about effects on food prices. Suppliers in the sugar-alcohol sector need to decide the ideal proportion of ethanol and sugar to optimise their financial strategy. Prices can be affected by exogenous factors, such as exchange rates and interest rates, as well as by non-observable variables like the convenience yield, which is related to supply shortages. The literature generally uses two approaches: artificial neural networks (ANNs), which are recognised as being at the forefront of exogenous-variable analysis, and stochastic models such as the Kalman filter, which is able to account for non-observable variables. This article proposes a hybrid model for forecasting the prices of agricultural commodities that is built upon both approaches and is applied to forecast the price of sugar. The Kalman filter considers the structure of the stochastic process that describes the evolution of prices, while the neural network allows for variables that can impact asset prices in an indirect, nonlinear way, which cannot be incorporated easily into traditional econometric models.
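As a loose, fully synthetic sketch of the hybrid idea (not the authors' actual model), the snippet below lets a random-walk Kalman filter track the latent price level and a small neural network explain the remaining residual from exogenous variables; data, noise levels and network size are all placeholders.

```python
# Hedged sketch of a hybrid forecaster in the spirit described above: a simple
# random-walk Kalman filter tracks the latent price level, and a small neural
# network maps exogenous variables (synthetic exchange rate and oil price) to
# the filter's residuals. All data and parameters are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 300
fx = np.cumsum(rng.normal(0, 0.02, n)) + 5.0         # exchange rate (synthetic)
oil = np.cumsum(rng.normal(0, 0.5, n)) + 70.0        # oil price (synthetic)
price = 20 + 0.5 * fx + 0.05 * oil + np.cumsum(rng.normal(0, 0.1, n))

# --- Kalman filter for a local-level (random-walk) price model ---
q, r = 0.05, 0.5                                     # process / observation noise
x, p = price[0], 1.0
kalman_pred = np.empty(n)
for t in range(n):
    kalman_pred[t] = x                               # one-step-ahead prediction
    p = p + q
    k = p / (p + r)                                  # Kalman gain
    x = x + k * (price[t] - x)                       # update with the observation
    p = (1 - k) * p

# --- ANN models the part of the price the filter does not explain ---
residual = price - kalman_pred
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                 random_state=0))
ann.fit(np.column_stack([fx, oil]), residual)

hybrid = kalman_pred + ann.predict(np.column_stack([fx, oil]))
print("RMSE, Kalman only:", np.sqrt(np.mean((price - kalman_pred) ** 2)))
print("RMSE, hybrid:     ", np.sqrt(np.mean((price - hybrid) ** 2)))
```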
Abstract:
This article analysed scenarios for Brazilian consumption of ethanol for the period 2006 to 2012. The results show that if the country's GDP sustains growth of 4.6% a year, domestic consumption of fuel ethanol could increase to 25.16 billion liters over this period, a volume relatively close to the forecasted gasoline consumption of 31 billion liters. At a lower GDP growth of 1.22% a year, gasoline consumption would be reduced and domestic ethanol consumption in Brazil would be no higher than 18.32 billion liters. Contrary to the current situation, the forecasts indicated that hydrated ethanol consumption could become much higher than anhydrous ethanol consumption in Brazil. The former is consumed by cars powered exclusively by ethanol and by flex-fuel cars, successfully introduced in the country in 2003. Flex cars allow Brazilian consumers to choose between gasoline and hydrated ethanol and to immediately switch to whichever fuel presents the most favourable relative price.
Abstract:
Purpose: To evaluate the influence of cross-sectional arc calcification on the diagnostic accuracy of computed tomography (CT) angiography compared with conventional coronary angiography for the detection of obstructive coronary artery disease (CAD). Materials and Methods: Institutional review board approval and written informed consent were obtained from all centers and participants for this HIPAA-compliant study. Overall, 4511 segments from 371 symptomatic patients (279 men, 92 women; median age, 61 years [interquartile range, 53-67 years]) with clinical suspicion of CAD from the CORE-64 multicenter study were included in the analysis. Two independent blinded observers evaluated the percentage of diameter stenosis and the circumferential extent of calcium (arc calcium). The accuracy of quantitative multidetector CT angiography (CTA) to depict substantial (>50%) stenoses was assessed by using quantitative coronary angiography (QCA). Cross-sectional arc calcium was rated on a segment level as follows: noncalcified or mild (<90°), moderate (90°-180°), or severe (>180°) calcification. Univariable and multivariable logistic regression, receiver operating characteristic curve, and clustering methods were used for statistical analyses. Results: A total of 1099 segments had mild calcification, 503 had moderate calcification, 338 had severe calcification, and 2571 segments were noncalcified. Calcified segments were highly associated (P < .001) with disagreement between CTA and QCA in multivariable analysis after controlling for sex, age, heart rate, and image quality. The prevalence of CAD was 5.4% in noncalcified segments, 15.0% in mildly calcified segments, 27.0% in moderately calcified segments, and 43.0% in severely calcified segments. A significant difference was found in the areas under the receiver operating characteristic curves (noncalcified: 0.86, mildly calcified: 0.85, moderately calcified: 0.82, severely calcified: 0.81; P < .05). Conclusion: In a symptomatic patient population, segment-based coronary artery calcification significantly decreased agreement between multidetector CT angiography and QCA for the detection of a coronary stenosis of at least 50%.
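To make the stratified comparison concrete, the fully synthetic sketch below computes a per-stratum area under the ROC curve for a CT-based stenosis score against a binary QCA-style reference; the separations are arbitrary and none of the values reproduce the study's data.

```python
# Illustrative sketch (fully synthetic): per-stratum ROC AUCs for a CT-based
# stenosis score against a binary QCA-style reference, mirroring the stratified
# comparison described above.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
separation = {"noncalcified": 2.0, "mild": 1.8, "moderate": 1.4, "severe": 1.2}

for stratum, sep in separation.items():
    n = 400
    obstructive = rng.random(n) < 0.2                  # reference: >=50% stenosis
    score = obstructive * sep + rng.normal(0, 1, n)    # CT score, noisier when calcified
    print(stratum, round(roc_auc_score(obstructive, score), 2))
```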