991 results for Error-resilient Applications
Abstract:
Different surface treatment protocols of poly(methyl methacrylate) have been proposed to improve the adhesion of silicone-based resilient denture liners to poly(methyl methacrylate) surfaces. The purpose of this study was to evaluate the effect of different poly(methyl methacrylate) surface treatments on the adhesion of silicone-based resilient denture liners. Poly(methyl methacrylate) specimens were prepared and divided into 4 treatment groups: no treatment (control), methyl methacrylate for 180 seconds, acetone for 30 seconds, and ethyl acetate for 60 seconds. Poly(methyl methacrylate) disks (30.0 × 5.0 mm; n = 10) were evaluated for surface roughness and surface free energy. To evaluate tensile bond strength, the resilient material was applied between 2 treated poly(methyl methacrylate) bars (60.0 × 5.0 × 5.0 mm; n = 20 for each group) to form a 2-mm-thick layer. Data were analyzed by 1-way ANOVA and the Tukey honestly significant difference test (α = .05). A Pearson correlation test was used to assess the influence of surface properties on tensile bond strength. Failure type was assessed, and the poly(methyl methacrylate) surface treatment modifications were visualized with scanning electron microscopy. Surface roughness was increased (P < .05) by the methyl methacrylate treatment. For the acetone and ethyl acetate groups, the surface free energy decreased (P < .05). Tensile bond strength was higher for the methyl methacrylate and ethyl acetate groups (P < .05). No correlation was found between surface properties and tensile bond strength. Specimens treated with acetone and methyl methacrylate presented a cleaner surface, whereas the ethyl acetate treatment produced a porous topography. The methyl methacrylate and ethyl acetate surface treatment protocols improved the adhesion of a silicone-based resilient denture liner to poly(methyl methacrylate).
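As a hedged illustration of the statistical pipeline named in this abstract (1-way ANOVA, Tukey HSD at α = .05, and a Pearson correlation), here is a minimal Python sketch; all measurement values below are invented, not the study's data:

```python
import numpy as np
from scipy.stats import f_oneway, pearsonr
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Invented tensile-bond-strength samples (MPa) for the four treatment groups.
groups = {
    "control": rng.normal(1.8, 0.3, 20),
    "MMA": rng.normal(2.6, 0.3, 20),
    "acetone": rng.normal(1.9, 0.3, 20),
    "ethyl_acetate": rng.normal(2.5, 0.3, 20),
}

# One-way ANOVA across the four groups.
f_stat, p_val = f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Tukey HSD for pairwise comparisons at alpha = .05.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))

# Pearson correlation between a surface property and bond strength
# (invented roughness values, paired per specimen).
roughness = rng.normal(0.5, 0.1, len(values))
r, p = pearsonr(roughness, values)
print(f"Pearson: r = {r:.3f}, p = {p:.4f}")
```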
Abstract:
The goal of this cross-sectional observational study was to quantify the pattern-shift visual evoked potentials (VEP) and the thickness as well as the volume of retinal layers using optical coherence tomography (OCT) across a cohort of Parkinson's disease (PD) patients and age-matched controls. Forty-three PD patients and 38 controls were enrolled. All participants underwent a detailed neurological and ophthalmologic evaluation. Idiopathic PD cases were included; cases with glaucoma or increased intra-ocular pressure were excluded. Patients were assessed by VEP and high-resolution Fourier-domain OCT, which quantified the inner and outer thicknesses of the retinal layers. VEP latencies and the thicknesses of the retinal layers were the main outcome measures. The mean ages (standard deviation, SD) of the PD patients and controls were 63.1 (7.5) and 62.4 (7.2) years, respectively. The patients were predominantly in the initial Hoehn-Yahr (HY) disease stages (34.8% in stage 1 or 1.5, and 55.8% in stage 2). The VEP latencies and the thicknesses as well as the volumes of the retinal inner and outer layers were similar between the groups. A negative correlation between retinal thickness and age was noted in both groups. The thickness of the retinal nerve fibre layer (RNFL) was 102.7 μm in PD patients vs. 104.2 μm in controls. The thicknesses of retinal layers, VEP, and RNFL of PD patients were similar to those of the controls. Despite the use of a representative cohort of PD patients and high-resolution OCT in this study, further studies are required to establish the validity of OCT and VEP measurements as anatomic and functional biomarkers for the evaluation of retinal and visual pathways in PD patients.
Abstract:
Paper has become increasingly recognized as a very interesting substrate for the construction of microfluidic devices, with potential application in a variety of areas, including health diagnosis, environmental monitoring, immunoassays and food safety. The aim of this review is to present a short history of analytical systems constructed from paper, summarize the main advantages and disadvantages of fabrication techniques, explore alternative methods of detection such as colorimetric, electrochemical, photoelectrochemical, chemiluminescence and electrochemiluminescence detection, and take a closer look at the achievements in the field of bioanalysis published during the last 2 years. Finally, future trends for the production of such devices are discussed.
Abstract:
Ten common doubts of chemistry students and professionals about statistical applications are discussed. The use of the N-1 denominator instead of N in the standard deviation is described. The statistical meaning of the denominators of the root mean square error of calibration (RMSEC) and the root mean square error of validation (RMSEV) is given for researchers using multivariate calibration methods. The reason why scientists and engineers use the average instead of the median is explained. Several problematic aspects of regression and correlation are treated. The popular use of triplicate experiments in teaching and research laboratories is shown to have its origin in statistical confidence intervals. Nonparametric statistics and bootstrapping methods round out the discussion.
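A minimal Python sketch of two of the points above, the N-1 denominator and the RMSE-style error measures; all data values are invented, and the simple RMSE form shown is a sketch (RMSEC in multivariate calibration often further corrects its denominator for the number of fitted model parameters):

```python
import numpy as np

# Sample standard deviation uses the N-1 denominator (Bessel's correction):
# one degree of freedom is spent estimating the mean, and dividing by N-1
# makes the variance estimate unbiased.
x = np.array([10.1, 9.8, 10.3, 10.0, 9.9])
s_sample = np.std(x, ddof=1)  # N-1 denominator (the usual sample estimate)
s_pop = np.std(x, ddof=0)     # N denominator (population formula)

def rmse(reference, predicted):
    """Root mean square error between reference and predicted values."""
    reference, predicted = np.asarray(reference), np.asarray(predicted)
    return np.sqrt(np.mean((reference - predicted) ** 2))

# RMSEC is computed on the calibration samples and RMSEV on the separate
# validation samples (invented numbers).
rmsec = rmse([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.9])
rmsev = rmse([1.5, 2.5, 3.5], [1.7, 2.4, 3.8])
print(s_sample, s_pop, rmsec, rmsev)
```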
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
In this paper, space adaptivity is introduced to control the error in the numerical solution of hyperbolic systems of conservation laws. The reference numerical scheme is a new version of the discontinuous Galerkin method that, for stability, uses an implicit diffusive term in the direction of the streamlines. The decision whether to refine or to unrefine the grid at a given location is made according to the magnitude of the wavelet coefficients, which indicate the local smoothness of the numerical solution. Numerical solutions of the nonlinear Euler equations illustrate the efficiency of the method.
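A minimal sketch of the refine/unrefine decision described above, using the detail coefficients of one discrete wavelet transform level as the local smoothness indicator (PyWavelets); the test function, wavelet, and thresholds are invented for illustration, and the paper's multiresolution machinery is considerably more elaborate:

```python
import numpy as np
import pywt  # PyWavelets

# 1-D stand-in for a conservation-law solution with a sharp front.
x = np.linspace(0.0, 1.0, 256)
u = np.tanh((x - 0.5) / 0.02)

# One level of a discrete wavelet transform: the detail coefficients are
# large near the front and nearly zero where the solution is smooth, so
# their magnitude serves as a local smoothness indicator.
_, detail = pywt.dwt(u, 'db4')

tol = 1e-3                              # invented threshold
refine = np.abs(detail) > tol           # cells flagged for refinement
unrefine = np.abs(detail) < 0.01 * tol  # cells safe to coarsen
print(refine.sum(), unrefine.sum())
```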
Abstract:
The technical evaluation of analytical data is highly relevant because the data are used for comparison with environmental quality standards and for decision-making in the management of dredged-sediment disposal and in the evaluation of salt and brackish water quality under the CONAMA 357/05 Resolution. It is therefore essential that the project manager discuss the environmental agency's technical requirements with the contracted laboratory, both to follow up the analyses underway and with a view to possible re-analysis when anomalous data are identified. The main technical requirements are: (1) method quantitation limits (QLs) should fall below environmental standards; (2) analyses should be carried out in laboratories whose analytical scope is accredited by the National Institute of Metrology (INMETRO) or qualified or accepted by a licensing agency; (3) a chain of custody should be provided to ensure sample traceability; (4) control charts should be provided to demonstrate method performance; (5) certified reference materials or, when unavailable, matrix spikes should be analyzed; and (6) chromatograms should be included in the analytical report. Within this context, and to help environmental managers evaluate analytical reports, this work discusses the limitations of applying SW-846 US EPA methods to marine samples and the consequences of reporting data against method detection limits (MDLs) rather than sample quantitation limits (SQLs), and presents possible modifications of the principal methods applied by laboratories in order to comply with environmental quality standards.
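As a rough numerical sketch of the MDL-versus-SQL distinction discussed above: the classic US EPA method detection limit is computed from replicate low-level spikes, while a sample quantitation limit additionally reflects sample-specific factors. The replicate values and dilution factor below are invented, and the SQL shown is a simplified illustration:

```python
import numpy as np
from scipy.stats import t

# Replicate low-level spike results for one analyte (invented values, ug/L).
replicates = np.array([0.52, 0.48, 0.55, 0.47, 0.50, 0.53, 0.49])

# Classic US EPA method detection limit: MDL = t(n-1, 0.99) * s, where s is
# the standard deviation of >= 7 replicates.
s = np.std(replicates, ddof=1)
mdl = t.ppf(0.99, len(replicates) - 1) * s

# A sample quantitation limit scales the MDL by sample-specific factors such
# as dilution (simplified; real SQLs also account for sample mass/volume and
# moisture content).
dilution_factor = 5.0
sql = mdl * dilution_factor
print(f"MDL = {mdl:.3f} ug/L, SQL = {sql:.3f} ug/L")
```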
Abstract:
Colloidal particles have been used to template the electrosynthesis of several materials, such as semiconductors, metals and alloys. The method allows good control over the thickness of the resulting material by choosing the appropriate charge applied to the system, and it is able to produce high-density deposited materials without shrinkage. These materials are faithful replicas of the template structure and, owing to the high surface areas obtained, are very promising for electrochemical applications. In the present work, monodisperse polystyrene templates were assembled over gold, platinum and glassy carbon substrates in order to demonstrate the electrodeposition of an oxide, a conducting polymer and a hybrid inorganic-organic material with applications in the supercapacitor and sensor fields. The performance of the resulting nanostructured films is compared with that of the analogous bulk materials, and the results achieved are reported in this paper.
Abstract:
We describe the concept, the fabrication, and the most relevant properties of a piezoelectric-polymer system: two fluoroethylenepropylene (FEP) films with good electret properties are laminated around a specifically designed and prepared polytetrafluoroethylene (PTFE) template at 300 °C. After removing the PTFE template, a two-layer FEP film with open tubular channels is obtained. For electric charging, the two-layer FEP system is subjected to a high electric field. The resulting dielectric barrier discharges inside the tubular channels yield a ferroelectret with high piezoelectricity. d33 coefficients of up to 160 pC/N have already been achieved on the ferroelectret films. After charging at suitably elevated temperatures, the piezoelectricity is stable at temperatures of at least 130 °C. Advantages of the transducer films include ease of fabrication at laboratory or industrial scales, a wide range of possible geometrical and processing parameters, straightforward control of the uniformity of the polymer system, the flexibility and versatility of the soft ferroelectrets, and a large potential for device applications, e.g., in the areas of biomedicine, communications, production engineering, sensor systems, and environmental monitoring.
Abstract:
The effects of chromium or nickel oxide additions on the composition of Portland clinker were investigated by X-ray powder diffraction combined with pattern analysis by the Rietveld method. The co-processing of industrial waste in Portland cement plants is an alternative solution to the problem of the final disposal of hazardous waste, and industrial waste containing chromium or nickel is hazardous and difficult to dispose of. It was observed that, at concentrations of up to 1% by mass, the chromium or nickel oxide additions do not cause significant alterations in the composition of Portland clinker.
Abstract:
Background: Genome-wide association studies (GWAS) are becoming the approach of choice to identify genetic determinants of complex phenotypes and common diseases. The astonishing amount of generated data and the use of distinct genotyping platforms with variable genomic coverage remain analytical challenges. Imputation algorithms combine information from directly genotyped markers with the haplotypic structure of the population of interest to infer poorly genotyped or missing markers, and are considered a near-zero-cost approach to allow the comparison and combination of data generated in different studies. Several reports have stated that imputed markers have overall acceptable accuracy, but no published report has performed a pairwise comparison of imputed and empirical association statistics for a complete set of GWAS markers. Results: In this report we identified a total of 73 imputed markers that yielded a nominally statistically significant association at P < 10⁻⁵ for type 2 diabetes mellitus and compared them with results obtained from empirical allelic frequencies. Interestingly, despite their overall high correlation, association statistics based on imputed frequencies were discordant in 35 of the 73 (47%) associated markers, considerably inflating the type I error rate of imputed markers. We comprehensively tested several quality thresholds, the haplotypic structure underlying imputed markers, and the use of flanking markers as predictors of inaccurate association statistics derived from imputed markers. Conclusions: Our results suggest that association statistics from imputed markers in specific minor allele frequency (MAF) ranges, located in weak linkage disequilibrium blocks, or strongly deviating from local patterns of association are prone to inflated false-positive association signals. The present study highlights the potential of imputation procedures and proposes simple procedures for selecting the best imputed markers for follow-up genotyping studies.
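To make the kind of discordance described above concrete, here is a hedged Python sketch of a simple allelic chi-square association test applied to one marker under two allele-count tables, one from direct genotyping and one from rounded imputed calls; all counts are invented and this is not the study's actual pipeline:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: cases, controls; columns: minor- and major-allele counts.
# The empirical table comes from direct genotyping; the imputed table is
# what rounded imputed dosages might yield for the same marker (invented).
empirical = np.array([[420, 580],
                      [350, 650]])
imputed = np.array([[460, 540],
                    [350, 650]])

# A modest shift in the inferred case allele count can move the marker
# across a significance threshold -- the source of inflated type I error.
for name, table in (("empirical", empirical), ("imputed", imputed)):
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"{name}: chi2 = {chi2:.2f}, p = {p:.2e}")
```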
Abstract:
Hardy-Weinberg equilibrium (HWE) is an important genetic property that populations exhibit in the absence of disturbing forces such as a complete lack of panmixia, excess mutation, or excess selection pressure. HWE has been evaluated for decades; both frequentist and Bayesian methods are in use today. While the HWE formula was historically developed to examine the transmission of alleles in a population from one generation to the next, HWE concepts are now also used in human disease studies to detect genotyping error and disease susceptibility (association); see Ryckman and Williams (2008). Most analyses focus on whether a population is in HWE; they do not try to quantify how far from equilibrium the population is. In this paper, we propose the use of a simple disequilibrium coefficient for a locus with two alleles. Based on the posterior density of this disequilibrium coefficient, we show how one can conduct a Bayesian analysis to verify how far from HWE a population is. Other coefficients have been introduced in the literature; the advantage of the one introduced here is that, just like standard correlation coefficients, its range is bounded and it is symmetric around zero (equilibrium) with respect to positive and negative values. To test the hypothesis of equilibrium, we use a simple Bayesian significance test, the Full Bayesian Significance Test (FBST); see Pereira, Stern and Wechsler (2008) for a complete review. The proposed disequilibrium coefficient provides an easy and efficient way to carry out the analyses, especially with Bayesian statistics. An R routine (R Development Core Team, 2009) that implements the calculations is provided for readers.
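A rough numerical sketch of a bounded, zero-symmetric disequilibrium coefficient for a biallelic locus; the correlation-style normalization below is one common choice and may differ from the paper's exact definition, and the genotype counts are invented:

```python
# Genotype counts at a biallelic locus (invented example data).
n_AA, n_Aa, n_aa = 30, 50, 20
n = n_AA + n_Aa + n_aa

p = (2 * n_AA + n_Aa) / (2 * n)  # frequency of allele A
q = 1.0 - p

# Raw disequilibrium: departure of the observed homozygote frequency
# from its Hardy-Weinberg expectation p^2 (zero at equilibrium).
d = n_AA / n - p ** 2

# Correlation-style normalization, bounded and symmetric around zero,
# analogous in spirit to the coefficient described in the abstract.
f = d / (p * q)
print(f"p = {p:.3f}, d = {d:.4f}, f = {f:.4f}")
```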
Abstract:
Background Data and Objective: There is anecdotal evidence that low-level laser therapy (LLLT) may affect the development of muscular fatigue, minor muscle damage, and recovery after heavy exercises. Although manufacturers claim that cluster probes (LEDT) may be more effective than single-diode lasers in clinical settings, there is a lack of head-to-head comparisons in controlled trials. This study was designed to compare the effect of single-diode LLLT and cluster LEDT before heavy exercise. Materials and Methods: This was a randomized, placebo-controlled, double-blind cross-over study. Young male volleyball players (n = 8) were enrolled and asked to perform three Wingate cycle tests after 4 × 30 sec LLLT or LEDT pretreatment of the rectus femoris muscle with either (1) an active LEDT cluster probe (660/850 nm, 10/30 mW), (2) a placebo cluster probe with no output, or (3) a single-diode 810-nm, 200-mW laser. Results: The active LEDT group had significantly decreased post-exercise creatine kinase (CK) levels (-18.88 ± 41.48 U/L) compared to the placebo cluster group (26.88 ± 15.18 U/L) (p < 0.05) and the active single-diode laser group (43.38 ± 32.90 U/L) (p < 0.01). None of the pre-exercise LLLT or LEDT protocols enhanced performance on the Wingate tests or reduced post-exercise blood lactate levels. However, a non-significant tendency toward lower post-exercise blood lactate levels in the treated groups should be explored further. Conclusion: In this experimental set-up, only the active LEDT probe decreased post-exercise CK levels after the Wingate cycle test. Neither performance nor blood lactate levels were significantly affected by this protocol of pre-exercise LEDT or LLLT.