161 results for Data Accuracy
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
We present a new set of oscillator strengths for 142 Fe II lines in the wavelength range 4000-8000 Å. Our gf-values are both accurate and precise, because each multiplet was globally normalized using laboratory data (accuracy), while the relative gf-values of individual lines within a given multiplet were obtained from theoretical calculations (precision). Our line list was tested with the Sun and high-resolution (R ≈ 10^5), high-S/N (≈ 700-900) Keck+HIRES spectra of the metal-poor stars HD 148816 and HD 140283, for which line-to-line scatter (σ) in the iron abundances from Fe II lines as low as 0.03, 0.04, and 0.05 dex is found, respectively. For these three stars the standard error in the mean iron abundance from Fe II lines is negligible (σ_mean ≤ 0.01 dex). The mean solar iron abundance obtained using our gf-values and different model atmospheres is A(Fe) = 7.45 (σ = 0.02).
Abstract:
Several impression materials are available in the Brazilian marketplace for use in oral rehabilitation. The aim of this study was to compare the accuracy of different impression materials used for fixed partial dentures following the manufacturers' instructions. A master model representing a partially edentulous mandibular right hemi-arch segment, whose teeth were prepared to receive full crowns, was used. Custom trays were prepared with auto-polymerizing acrylic resin and impressions were performed with a dental surveyor, standardizing the path of insertion and removal of the tray. Alginate and elastomeric materials were used, and stone casts were obtained from the impressions. For the silicones, impression techniques were also compared. To determine the impression materials' accuracy, digital photographs of the master model and of the stone casts were taken and the discrepancies between them were measured. The data were subjected to analysis of variance and Duncan's complementary test. Polyether and addition silicone following the single-phase technique were statistically different from alginate, condensation silicone and addition silicone following the double-mix technique (p < .05), presenting smaller discrepancies. However, condensation silicone was similar (p > .05) to alginate and addition silicone following the double-mix technique, but different from polysulfide. The results led to the conclusion that different impression materials and techniques influenced the stone casts' accuracy: polyether, polysulfide and addition silicone following the single-phase technique were more accurate than the other materials.
Abstract:
Due to the imprecise nature of biological experiments, biological data are often characterized by the presence of redundant and noisy instances. This may be due to errors that occurred during data collection, such as contamination of laboratory samples. This is the case for gene expression data, where the equipment and tools currently used frequently produce noisy measurements. Machine Learning algorithms have been successfully used in gene expression data analysis. Although many Machine Learning algorithms can deal with noise, detecting and removing noisy instances from the training data set can help the induction of the target hypothesis. This paper evaluates the use of distance-based pre-processing techniques for noise detection in gene expression data classification problems. This evaluation analyzes the effectiveness of the investigated techniques in removing noisy data, measured by the accuracy obtained by different Machine Learning classifiers over the pre-processed data.
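One classic member of the distance-based noise-filter family the abstract refers to is the Edited Nearest Neighbour rule; the abstract does not name the specific techniques used, so the sketch below is an illustrative assumption, not the paper's method:

```python
import numpy as np

def enn_filter(X, y, k=3):
    """Edited Nearest Neighbour: drop instances whose k nearest
    neighbours (excluding the instance itself) mostly disagree
    with the instance's own label."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    keep = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # exclude the instance itself
        nn = np.argsort(d)[:k]             # indices of k nearest neighbours
        # keep the instance only if a majority of neighbours agree
        if np.sum(y[nn] == y[i]) >= k / 2:
            keep.append(i)
    return np.array(keep)

# two well-separated classes plus one mislabelled point (index 6)
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2], [0.05]])
y = np.array([0, 0, 0, 1, 1, 1, 1])        # last label is noise
kept = enn_filter(X, y, k=3)
```

The filtered index set `kept` would then be used to train any downstream classifier on `X[kept], y[kept]`.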
Abstract:
Background: Genome-wide association studies (GWAS) are becoming the approach of choice to identify genetic determinants of complex phenotypes and common diseases. The astonishing amount of generated data and the use of distinct genotyping platforms with variable genomic coverage are still analytical challenges. Imputation algorithms combine information from directly genotyped markers with the haplotypic structure of the population of interest to infer poorly genotyped or missing markers, and are considered a near-zero-cost approach to allow the comparison and combination of data generated in different studies. Several reports have stated that imputed markers have an overall acceptable accuracy, but no published report has performed a pairwise comparison of imputed and empirical association statistics for a complete set of GWAS markers. Results: In this report we identified a total of 73 imputed markers that yielded a nominally statistically significant association at P < 10^-5 for type 2 Diabetes Mellitus and compared them with results obtained from empirical allelic frequencies. Interestingly, despite their overall high correlation, association statistics based on imputed frequencies were discordant in 35 of the 73 (47%) associated markers, considerably inflating the type I error rate of imputed markers. We comprehensively tested several quality thresholds, the haplotypic structure underlying imputed markers, and the use of flanking markers as predictors of inaccurate association statistics derived from imputed markers. Conclusions: Our results suggest that association statistics from imputed markers in specific MAF (minor allele frequency) ranges, located in weak linkage disequilibrium blocks, or strongly deviating from local patterns of association are prone to inflated false-positive association signals.
The present study highlights the potential of imputation procedures and proposes simple procedures for selecting the best imputed markers for follow-up genotyping studies.
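The imputed-versus-empirical statistics being compared are typically Pearson chi-square tests on 2x2 allele-count tables. A minimal sketch with purely illustrative counts (not data from the study):

```python
import math

def allelic_chi2(case_counts, control_counts):
    """Pearson chi-square test on a 2x2 allele-count table
    (rows: case/control; columns: minor/major allele), 1 d.f."""
    a, b = case_counts
    c, d = control_counts
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # survival function of the chi-square distribution with 1 d.f.
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p

# hypothetical (minor, major) allele counts in cases and controls
chi2, p = allelic_chi2((120, 880), (80, 920))
```

Running the same test twice per marker, once with imputed and once with empirical counts, is what the pairwise comparison described above amounts to.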
Abstract:
The literature presents a huge number of different simulations of gas-solid flows in risers applying two-fluid modeling. In spite of that, the related quantitative accuracy issue remains mostly untouched. This state of affairs seems to be mainly a consequence of modeling shortcomings, notably the lack of realistic closures. In this article, predictions from a two-fluid model are compared to other published two-fluid model predictions applying the same closures, and to experimental data. A particular matter of concern is whether or not the predictions are generated inside the statistical steady-state regime that characterizes riser flows. The present simulation was performed inside the statistical steady-state regime. Time-averaged results are presented for different time-averaging intervals of 5, 10, 15 and 20 s inside the statistical steady-state regime. The independence of the averaged results with respect to the time-averaging interval is addressed, and the results averaged over the intervals of 10 and 20 s are compared to both experiment and other two-fluid predictions. It is concluded that the two-fluid model used is still very crude and cannot provide quantitatively accurate results, at least for the particular case considered.
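Checking that time averages are insensitive to the averaging interval, as done above for the 5-20 s windows, amounts to comparing window means inside the statistically steady regime. A minimal sketch on synthetic data (the signal, sampling step, and window start are assumptions, not the simulation's output):

```python
import numpy as np

def interval_average(signal, dt, t_start, t_len):
    """Time-average of a sampled signal over [t_start, t_start + t_len)."""
    i0 = int(round(t_start / dt))
    i1 = int(round((t_start + t_len) / dt))
    return signal[i0:i1].mean()

# synthetic statistically steady signal: mean 1.0 plus noise, 30 s at dt = 0.01 s
rng = np.random.default_rng(0)
dt = 0.01
sig = 1.0 + 0.1 * rng.standard_normal(3000)
# averages over 5, 10 and 20 s windows starting at t = 5 s
avgs = {T: interval_average(sig, dt, 5.0, T) for T in (5, 10, 20)}
```

In a statistically steady state the window means should agree to within their sampling error, which is the independence check the article performs.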
Abstract:
Telemedicine might increase the speed of leprosy diagnosis and reduce the development of disabilities. We compared the accuracy of diagnosis made by telemedicine with that made by in-person examination. The cases were patients with suspected leprosy at eight public health clinics in outlying areas of the city of São Paulo. The case history and clinical examination data, and at least two clinical images for each patient, were stored in a web-based system developed for teledermatology. After the examination in the public clinic, patients then attended a teaching hospital for an in-person examination. The benchmark was the clinical examination of two dermatologists at the university hospital. From August 2005 to April 2006, 142 suspected cases of leprosy were forwarded to the website by the doctors at the clinics. Of these, 36 cases were excluded. There was overall agreement in the diagnosis of leprosy in 74% of the 106 remaining cases. The sensitivity was 78% and the specificity was 31%. Although the specificity was low, the study suggests that telemedicine may be a useful low-cost method for obtaining second opinions in programmes to control leprosy.
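Sensitivity, specificity, and overall agreement figures like those above all derive from a 2x2 table against the reference diagnosis. A minimal sketch (the counts below are purely illustrative, not the study's data):

```python
def diagnostic_accuracy(tp, fn, tn, fp):
    """Sensitivity, specificity and overall agreement from a 2x2
    comparison of the index test against the reference diagnosis."""
    sensitivity = tp / (tp + fn)   # true positives among reference-positives
    specificity = tn / (tn + fp)   # true negatives among reference-negatives
    agreement = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, agreement

# hypothetical counts: 40 TP, 10 FN, 30 TN, 20 FP
sens, spec, agr = diagnostic_accuracy(40, 10, 30, 20)
```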
Abstract:
Aims: We conducted a meta-analysis to evaluate the accuracy of quantitative stress myocardial contrast echocardiography (MCE) in coronary artery disease (CAD). Methods and results: A database search was performed through January 2008. We included studies evaluating the accuracy of quantitative stress MCE for detection of CAD compared with coronary angiography or single-photon emission computed tomography (SPECT) and measuring reserve parameters of A, beta, and A beta. Data from studies were verified and supplemented by the authors of each study. Using random-effects meta-analysis, we estimated weighted mean differences (WMD), likelihood ratios (LRs), diagnostic odds ratios (DORs), and summary area under the curve (AUC), all with 95% confidence intervals (CI). Of 1443 studies, 13 including 627 patients (age range, 38-75 years) and comparing MCE with angiography (n = 10), SPECT (n = 1), or both (n = 2) were eligible. Reserve values were significantly lower in the CAD group than in the no-CAD group, with WMDs (95% CI) of 0.12 (0.06-0.18) (P < 0.001), 1.38 (1.28-1.52) (P < 0.001), and 1.47 (1.18-1.76) (P < 0.001) for the A, beta, and A beta reserves, respectively. Pooled LRs for a positive test were 1.33 (1.13-1.57), 3.76 (2.43-5.80), and 3.64 (2.87-4.78), and LRs for a negative test were 0.68 (0.55-0.83), 0.30 (0.24-0.38), and 0.27 (0.22-0.34) for the A, beta, and A beta reserves, respectively. Pooled DORs were 2.09 (1.42-3.07), 15.11 (7.90-28.91), and 14.73 (9.61-22.57), and AUCs were 0.637 (0.594-0.677), 0.851 (0.828-0.872), and 0.859 (0.842-0.750) for the A, beta, and A beta reserves, respectively. Conclusion: Evidence supports the use of quantitative MCE as a non-invasive test for detection of CAD. Standardizing MCE quantification analysis and adherence to reporting standards for diagnostic tests could enhance the quality of evidence in this field.
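Pooled estimates such as the WMDs and DORs above come from a random-effects model; a common estimator is DerSimonian-Laird. A minimal sketch (the per-study effect sizes and variances below are hypothetical inputs, not data from this meta-analysis):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird
    between-study variance estimator."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                              # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)         # fixed-effect pooled mean
    q = np.sum(w * (y - y_fe) ** 2)          # Cochran's Q heterogeneity statistic
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance
    w_re = 1.0 / (v + tau2)                  # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se

# hypothetical study effects (e.g. log-DORs) and within-study variances
pooled, se = dersimonian_laird([2.6, 2.8, 2.5, 3.0], [0.10, 0.15, 0.08, 0.20])
```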
Abstract:
Purpose: The aim of this research was to assess the dimensional accuracy of orbital prostheses based on reversed images generated by computer-aided design/computer-assisted manufacturing (CAD/CAM) using computed tomography (CT) scans. Materials and Methods: CT scans of the faces of 15 adults, men and women older than 25 years of age not bearing any congenital or acquired craniofacial defects, were processed using CAD software to produce 30 reversed three-dimensional models of the orbital region. These models were then processed using the CAM system by means of selective laser sintering to generate surface prototypes of the volunteers' orbital regions. Two moulage impressions of the faces of each volunteer were taken to manufacture 15 pairs of casts. Orbital defects were created on the right or left side of each cast. The surface prototypes were adapted to the casts and then flasked to fabricate silicone prostheses. The establishment of anthropometric landmarks on the orbital region and facial midline allowed for the data collection of 31 linear measurements, used to assess the dimensional accuracy of the orbital prostheses and their location on the face. Results: The comparative analyses of the linear measurements taken from the orbital prostheses and the opposite sides that originated the surface prototypes demonstrated that the orbital prostheses presented similar vertical, transversal, and oblique dimensions, as well as similar depth. There was no transverse or oblique displacement of the prostheses. Conclusion: From a clinical perspective, the small differences observed after analyzing all 31 linear measurements did not indicate facial asymmetry. The dimensional accuracy of the orbital prostheses suggested that the CAD/CAM system assessed herein may be applicable for clinical purposes. Int J Prosthodont 2010;23:271-276.
Abstract:
Purpose: Orthodontic miniscrews are commonly used to achieve absolute anchorage during tooth movement. One of the most frequent complications is screw loss as a result of root contact. Increased precision during the process of miniscrew insertion would help prevent screw loss and potential root damage, improving treatment outcomes. Stereolithographic surgical guides have been commonly used for prosthetic implants to increase the precision of insertion. The objective of this paper was to describe the use of a stereolithographic surgical guide suitable for one-component orthodontic miniscrews based on cone beam computed tomography (CBCT) data and to evaluate implant placement accuracy. Materials and Methods: Acrylic splints were adapted to the dental arches of four patients, and six radiopaque reference points were filled with gutta-percha. The patients were submitted to CBCT while they wore the occlusal splint. Another series of images was captured with the splint alone. After superimposition and segmentation, miniscrew insertion was simulated using planning software that allowed the user to check the implant position in all planes and in three dimensions. In a rapid-prototyping machine, a stereolithographic guide was fabricated with metallic sleeves located at the insertion points to allow for three-dimensional control of the pilot bur. The surgical guide was worn during surgery. After implant insertion, each patient was submitted to CBCT a second time to verify the implant position and the accuracy of the placement of the miniscrews. Results: The average differences between the planned and inserted positions for the ten miniscrews were 0.86 mm at the coronal end, 0.71 mm at the center, and 0.87 mm at the apical tip. The average angular discrepancy was 1.76 degrees. Conclusions: The use of stereolithographic surgical guides based on CBCT data allows for accurate orthodontic miniscrew insertion without damaging neighboring anatomic structures.
Int J Oral Maxillofac Implants 2011;26:860-865.
Abstract:
The purpose of this study was to evaluate ex vivo the accuracy of an electronic apex locator during root canal length determination in primary molars. Methods: One calibrated examiner determined the root canal length in 15 primary molars (total = 34 root canals) with different stages of root resorption. Root canal length was measured both visually, with the placement of a K-file 1 mm short of the apical foramen or the apical resorption bevel, and electronically using an electronic apex locator (Digital Signal Processing). Data were analyzed statistically using the intraclass correlation (ICC) test. Results: Comparing the actual and electronic root canal length measurements in the primary teeth showed a high correlation (ICC = 0.95). Conclusions: The Digital Signal Processing apex locator is useful and accurate for apical foramen location during root canal length measurement in primary molars. (Pediatr Dent 2009;37:320-2) Received April 15, 2008 | Last Revision August 21, 2008 | Revision Accepted August 22, 2008
Abstract:
Aim To evaluate ex vivo the accuracy of two electronic apex locators during root canal length determination in primary incisor and molar teeth with different stages of physiological root resorption. Methodology One calibrated examiner determined the root canal length in 17 primary incisors and 16 primary molars (total of 57 root canals) with different stages of root resorption based on the actual canal length and using two electronic apex locators. Root canal length was measured both visually, with the placement of a K-file 1 mm short of the apical foramen or the apical resorption bevel, and electronically using two electronic apex locators (Root ZX II - J. Morita Corp. and Mini Apex Locator - SybronEndo) according to the manufacturers' instructions. Data were analysed statistically using the intraclass correlation (ICC) test. Results Comparison of the actual root canal length and the electronic root canal length measurements revealed high correlation (ICC = 0.99), regardless of the tooth type (single-rooted and multi-rooted teeth) or the presence/absence of physiological root resorption. Conclusions Root ZX II and Mini Apex Locator proved useful and accurate for apical foramen location during root canal length measurement in primary incisors and molars.
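The agreement statistic used in these apex-locator studies is the intraclass correlation between actual and electronic length readings. One common form is the one-way random-effects single-measure ICC, sketched below (the specific ICC form and the sample readings are assumptions, not the studies' data):

```python
import numpy as np

def icc_oneway(measurements):
    """One-way random-effects single-measure ICC(1,1).
    measurements: (n_subjects, k_raters) array -- here, actual vs
    electronic canal-length readings per root canal."""
    m = np.asarray(measurements, dtype=float)
    n, k = m.shape
    grand = m.mean()
    subj_means = m.mean(axis=1)
    # between-subject and within-subject mean squares (one-way ANOVA)
    msb = k * np.sum((subj_means - grand) ** 2) / (n - 1)
    msw = np.sum((m - subj_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# hypothetical canal lengths in mm: actual vs electronic readings
perfect = np.array([[10.0, 10.0], [12.0, 12.0], [15.0, 15.0], [18.0, 18.0]])
noisy = np.array([[10.0, 10.2], [12.0, 11.9], [15.0, 15.1], [18.0, 17.8]])
icc_perfect = icc_oneway(perfect)
icc_noisy = icc_oneway(noisy)
```

Identical readings give an ICC of 1, and small disagreements relative to the spread of canal lengths keep the ICC near 1, which is how values such as 0.95-0.99 arise.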
Abstract:
Aim To evaluate ex vivo the accuracy of the iPex multi-frequency electronic apex locator (NSK Ltd, Tokyo, Japan) for working length determination in primary molar teeth. Methodology One calibrated examiner determined the working length in 20 primary molar teeth (total of 33 root canals). Working length was measured both visually, with the placement of a K-file 1 mm short of the apical foramen or the most coronal limit of root resorption, and electronically using the electronic apex locator iPex, according to the manufacturer's instructions. Data were analysed statistically using the intraclass correlation (ICC) test. Results Comparison of the actual and the electronic measurements revealed high correlation (ICC = 0.99) between the methods, regardless of the presence or absence of physiological root resorption. Conclusions In this laboratory study, the iPex accurately identified the apical foramen or the apical opening location for working length measurement in primary molar teeth.
Abstract:
We present a catalogue of galaxy photometric redshifts and k-corrections for the Sloan Digital Sky Survey Data Release 7 (SDSS-DR7), available on the World Wide Web. The photometric redshifts were estimated with an artificial neural network using the five ugriz bands, concentration indices and Petrosian radii in the g and r bands. We explored our redshift estimates with different training sets, concluding that the best choice for improving redshift accuracy comprises the main galaxy sample (MGS), the luminous red galaxies and the galaxies of active galactic nuclei covering the redshift range 0 < z < 0.3. For the MGS, the photometric redshift estimates agree with the spectroscopic values within rms = 0.0227. The distribution of photometric redshifts derived in the range 0 < z_phot < 0.6 agrees well with the model predictions. k-corrections were derived by calibrating the k-correct_v4.2 code results for the MGS with the reference-frame (z = 0.1) (g - r) colours. We adopt a linear dependence of k-corrections on redshift and (g - r) colour that provides suitable distributions of luminosity and colours for galaxies up to redshift z_phot = 0.6, comparable to the results in the literature. Thus, our k-correction estimation procedure is a powerful, computationally cheap algorithm capable of reproducing suitable results that can be used for testing galaxy properties at intermediate redshifts using the large SDSS database.
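The adopted linear dependence of k-corrections on redshift and (g - r) colour can be calibrated by ordinary least squares. A minimal sketch (the exact functional form k = a + b·z + c·(g - r) and all coefficients below are illustrative assumptions, not the catalogue's calibration):

```python
import numpy as np

def fit_linear_kcorr(z, gr, k):
    """Least-squares fit of k = a + b*z + c*(g - r),
    a linear model in redshift and colour."""
    A = np.column_stack([np.ones_like(z), z, gr])  # design matrix
    coef, *_ = np.linalg.lstsq(A, k, rcond=None)
    return coef

# synthetic check: k-corrections generated from known coefficients
z = np.array([0.05, 0.1, 0.2, 0.3, 0.4, 0.5])
gr = np.array([0.4, 0.6, 0.7, 0.8, 0.9, 1.0])
k = 0.02 + 0.8 * z + 0.1 * gr
a, b, c = fit_linear_kcorr(z, gr, k)       # recovers (0.02, 0.8, 0.1)
```

Such a fitted plane is what makes the procedure cheap: evaluating it per galaxy replaces a full template-based k-correction computation.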
Abstract:
This work proposes and discusses an approach for inducing Bayesian classifiers aimed at balancing the tradeoff between the precise probability estimates produced by time-consuming unrestricted Bayesian networks and the computational efficiency of Naive Bayes (NB) classifiers. The proposed approach is based on the fundamental principles of heuristic-search Bayesian network learning. The Markov blanket concept, as well as a proposed "approximate Markov blanket", is used to reduce the number of nodes that form the Bayesian network to be induced from data. Consequently, the usually high computational cost of heuristic-search learning algorithms can be lessened, while Bayesian network structures better than NB can be achieved. The resulting algorithms, called DMBC (Dynamic Markov Blanket Classifier) and A-DMBC (Approximate DMBC), are empirically assessed in twelve domains that illustrate scenarios of particular interest. The obtained results are compared with NB and Tree Augmented Network (TAN) classifiers, and confirm that both proposed algorithms can provide good classification accuracies and better probability estimates than NB and TAN, while being more computationally efficient than the widely used K2 algorithm.
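For context, the NB baseline that DMBC and A-DMBC are compared against assumes feature independence given the class. A generic Gaussian Naive Bayes can be sketched in a few lines (this is the standard textbook classifier, not the authors' implementation):

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian Naive Bayes: independent Gaussian features
    per class, maximum-a-posteriori prediction."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.mu_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        self.logprior_ = np.log(np.array([np.mean(y == c) for c in self.classes_]))
        return self

    def predict(self, X):
        # log p(c|x) up to a constant: log p(c) + sum_j log N(x_j; mu_cj, var_cj)
        ll = -0.5 * (np.log(2 * np.pi * self.var_[:, None, :])
                     + (X[None, :, :] - self.mu_[:, None, :]) ** 2
                     / self.var_[:, None, :]).sum(axis=2)
        return self.classes_[np.argmax(self.logprior_[:, None] + ll, axis=0)]

# two well-separated classes; NB recovers the training labels
X = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 5.1]])
y = np.array([0, 0, 1, 1])
pred = GaussianNB().fit(X, y).predict(X)
```

DMBC/A-DMBC relax exactly the independence assumption visible in the per-feature sum above, by learning a restricted network structure over the (approximate) Markov blanket of the class node.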
Abstract:
When missing data occur in studies designed to compare the accuracy of diagnostic tests, a common, though naive, practice is to base the comparison of sensitivity, specificity, as well as of positive and negative predictive values on some subset of the data that fits into methods implemented in standard statistical packages. Such methods are usually valid only under the strong missing completely at random (MCAR) assumption and may generate biased and less precise estimates. We review some models that use the dependence structure of the completely observed cases to incorporate the information of the partially categorized observations into the analysis and show how they may be fitted via a two-stage hybrid process involving maximum likelihood in the first stage and weighted least squares in the second. We indicate how computational subroutines written in R may be used to fit the proposed models and illustrate the different analysis strategies with observational data collected to compare the accuracy of three distinct non-invasive diagnostic methods for endometriosis. The results indicate that even when the MCAR assumption is plausible, the naive partial analyses should be avoided.