62 results for Efficient error correction
Abstract:
In this work, the effects of indenter tip roundness on the load-depth indentation curves were analyzed using finite element modeling. The tip roundness level was studied based on the ratio between tip radius and maximum penetration depth (R/h(max)), which varied from 0.02 to 1. The proportional curvature constant (C), the exponent of depth during loading (alpha), the initial unloading slope (S), the correction factor (beta), the level of piling-up or sinking-in (h(c)/h(max)), and the ratio h(max)/h(f) are shown to be strongly influenced by the ratio R/h(max). The hardness (H) was found to be independent of R/h(max) in the range studied. The Oliver and Pharr method was successful in following the variation of h(c)/h(max) with the ratio R/h(max) through the variation of S with the ratio R/h(max). However, this work confirmed the differences between the hardness values calculated using the Oliver-Pharr method and those obtained directly from finite element calculations; these differences derive from the error in area calculation that occurs when given combinations of indented material properties are present. The ratio of plastic work to total work (W(p)/W(t)) was found to be independent of the ratio R/h(max), which demonstrates that methods for the calculation of mechanical properties based on the indentation energy are potentially not susceptible to errors caused by tip roundness.
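The hardness calculation in the Oliver-Pharr method referenced above can be sketched as follows. This is the textbook formulation for an ideal Berkovich tip (with the conventional epsilon = 0.75 correction factor), shown for orientation only; it is not this paper's finite-element procedure:

```python
def oliver_pharr_hardness(p_max, s, h_max, epsilon=0.75):
    """Hardness via the standard Oliver-Pharr contact-depth correction.

    h_c = h_max - epsilon * P_max / S gives the contact depth, and the
    ideal Berkovich area function A(h_c) = 24.5 * h_c**2 converts it to
    a projected contact area; hardness is H = P_max / A.
    """
    h_c = h_max - epsilon * p_max / s   # contact depth
    area = 24.5 * h_c ** 2              # ideal Berkovich area function
    return p_max / area                 # hardness H
```

Tip roundness distorts the real area function at shallow depths, which is exactly the source of the area-calculation error discussed above.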
Abstract:
Background: Genome-wide association studies (GWAS) are becoming the approach of choice to identify genetic determinants of complex phenotypes and common diseases. The astonishing amount of generated data and the use of distinct genotyping platforms with variable genomic coverage are still analytical challenges. Imputation algorithms combine information from directly genotyped markers with the haplotypic structure of the population of interest to infer poorly genotyped or missing markers, and are considered a near-zero-cost approach to allow the comparison and combination of data generated in different studies. Several reports have stated that imputed markers have an overall acceptable accuracy, but no published report has performed a pairwise comparison of imputed and empirical association statistics for a complete set of GWAS markers. Results: In this report we identified a total of 73 imputed markers that yielded a nominally statistically significant association at P < 10(-5) for type 2 diabetes mellitus and compared them with results obtained from empirical allelic frequencies. Interestingly, despite their overall high correlation, association statistics based on imputed frequencies were discordant in 35 of the 73 (47%) associated markers, considerably inflating the type I error rate of imputed markers. We comprehensively tested several quality thresholds, the haplotypic structure underlying imputed markers, and the use of flanking markers as predictors of inaccurate association statistics derived from imputed markers. Conclusions: Our results suggest that association statistics from imputed markers with specific minor allele frequency (MAF) ranges, located in weak linkage disequilibrium blocks, or strongly deviating from local patterns of association are prone to inflated false-positive association signals.
The present study highlights the potential of imputation procedures and proposes simple criteria for selecting the best imputed markers for follow-up genotyping studies.
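For orientation, the kind of per-marker allelic association statistic being compared between imputed and empirical frequencies can be sketched as a 2x2 allelic chi-square test with one degree of freedom. This is an assumed, generic formulation; the study's actual test may differ:

```python
import math

def allelic_chi2(case_a, case_b, ctrl_a, ctrl_b):
    """Allelic 2x2 chi-square association test (1 degree of freedom).

    Inputs are allele counts (allele a / allele b) in cases and controls.
    Returns the chi-square statistic and its upper-tail p-value, using the
    exact identity P(chi2_1 > x) = erfc(sqrt(x / 2)).
    """
    n = case_a + case_b + ctrl_a + ctrl_b
    rows = (case_a + case_b, ctrl_a + ctrl_b)   # case / control totals
    cols = (case_a + ctrl_a, case_b + ctrl_b)   # per-allele totals
    chi2 = 0.0
    for i, obs in enumerate((case_a, case_b, ctrl_a, ctrl_b)):
        exp = rows[i // 2] * cols[i % 2] / n    # expected count under H0
        chi2 += (obs - exp) ** 2 / exp
    p = math.erfc(math.sqrt(chi2 / 2))          # upper tail, 1 df
    return chi2, p
```

Running this on imputed versus empirically derived allele counts for the same marker is the pairwise comparison the abstract describes.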
Abstract:
Hardy-Weinberg Equilibrium (HWE) is an important genetic property that populations should exhibit whenever they are not subject to adverse conditions such as a complete lack of panmixia, excess mutation, excess selection pressure, etc. HWE has been evaluated for decades; both frequentist and Bayesian methods are in use today. While historically the HWE formula was developed to examine the transmission of alleles in a population from one generation to the next, use of HWE concepts has expanded in human disease studies to detect genotyping error and disease susceptibility (association); see Ryckman and Williams (2008). Most analyses focus on trying to answer the question of whether a population is in HWE; they do not try to quantify how far from equilibrium the population is. In this paper, we propose the use of a simple disequilibrium coefficient for a locus with two alleles. Based on the posterior density of this disequilibrium coefficient, we show how one can conduct a Bayesian analysis to verify how far from HWE a population is. Other coefficients have been introduced in the literature; the advantage of the one introduced in this paper is that, just like standard correlation coefficients, its range is bounded and it is symmetric around zero (equilibrium) when comparing the positive and the negative values. To test the hypothesis of equilibrium, we use a simple Bayesian significance test, the Full Bayesian Significance Test (FBST); see Pereira, Stern and Wechsler (2008) for a complete review. The proposed disequilibrium coefficient provides an easy and efficient way to conduct the analyses, especially if one uses Bayesian statistics. A routine in R (R Development Core Team, 2009) that implements the calculations is provided for the readers.
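The Bayesian workflow described can be sketched with a conjugate Dirichlet posterior over genotype probabilities. Note the hedges: the simple coefficient d = p_AA - p_A^2 used below (zero exactly at HWE) is a stand-in for illustration, not the bounded, symmetric coefficient the paper introduces, and the flat Dirichlet prior is an assumption:

```python
import numpy as np

def hwe_disequilibrium_posterior(n_aa, n_ab, n_bb, n_samples=10000, seed=0):
    """Posterior draws of a simple HWE disequilibrium coefficient.

    Genotype probabilities get a flat Dirichlet(1, 1, 1) prior, so the
    posterior given genotype counts is Dirichlet(n_aa+1, n_ab+1, n_bb+1).
    The coefficient d = p_AA - p_A**2 is zero at Hardy-Weinberg
    equilibrium, so its posterior density shows how far the population
    sits from HWE.
    """
    rng = np.random.default_rng(seed)
    draws = rng.dirichlet([n_aa + 1, n_ab + 1, n_bb + 1], size=n_samples)
    p_aa, p_ab, p_bb = draws.T
    p_a = p_aa + 0.5 * p_ab        # allele-A frequency per draw
    return p_aa - p_a ** 2         # disequilibrium coefficient draws
```

Summaries of these draws (posterior mean, credible interval, mass near zero) quantify the distance from equilibrium rather than just testing it.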
Abstract:
Background and Purpose: Radiofrequency (RF) ablation of renal tumors is a major technique for tumor cell destruction while preserving healthy renal parenchyma. There is no consensus in the literature regarding the optimal temperature, impedance, and time of RF application for effective cell destruction. This study investigated two variables while keeping time unchanged: temperature for RF cell destruction and tissue impedance in dog kidneys. Materials and Methods: Sixteen dogs had renal punctures through videolaparoscopy for RF interstitial tissue ablation. An RF generator was applied for 10 minutes to the dog's kidney at different target temperatures: 80 degrees C, 90 degrees C, and 100 degrees C. On postoperative day 14, the animals were sacrificed and nephrectomized. All lesions were macroscopically and microscopically examined. The bioelectrical impedance was evaluated at the three different temperatures. Results: Renal injuries were wider and deeper at 90 degrees C (P < 0.001), and they were similar at 80 degrees C and 100 degrees C. The bioelectrical impedance was lower at 90 degrees C than at 80 degrees C and 100 degrees C (P < 0.001). No viable cells were found in the RF ablation tissue area on microscopic examination. Conclusion: The most effective cell destruction in terms of width and depth was achieved at 90 degrees C, which was also the optimal temperature for tissue impedance. RF ablation of renal cells eliminated all viable cells.
Abstract:
Objective: To identify the skeletal, dentoalveolar, and soft tissue changes that occur during Class II correction with the Cantilever Bite Jumper (CBJ). Materials and Methods: This prospective cephalometric study was conducted on 26 subjects with Class II division 1 malocclusion treated with the CBJ appliance. A comparison was made with 26 untreated subjects with Class II malocclusion. Lateral head films from before and after CBJ therapy were analyzed through conventional cephalometric and Johnston analyses. Results: Class II correction was accomplished by means of a 2.9 mm apical base change, 1.5 mm distal movement of the maxillary molars, and 1.1 mm mesial movement of the mandibular molars. The CBJ exhibited good control of the vertical dimension. The main side effect of the CBJ was that the vertical force vectors of the telescope act as lever arms and can produce mesial tipping of the mandibular molars. Conclusions: The Cantilever Bite Jumper corrects Class II malocclusions with similar percentages of skeletal and dentoalveolar effects. (Angle Orthod. 2009:79;)
Abstract:
This paper presents a new statistical algorithm to estimate rainfall over the Amazon Basin region using the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). The algorithm relies on empirical relationships derived for different raining-type systems between coincident measurements of surface rainfall rate and 85-GHz polarization-corrected brightness temperature as observed by the precipitation radar (PR) and TMI on board the TRMM satellite. The scheme includes rain/no-rain area delineation (screening) and system-type classification routines for rain retrieval. The algorithm is validated against independent measurements of the TRMM-PR and S-band dual-polarization Doppler radar (S-Pol) surface rainfall data for two different periods. Moreover, the performance of this rainfall estimation technique is evaluated against well-known methods, namely, the TRMM-2A12 [the Goddard profiling algorithm (GPROF)], the Goddard scattering algorithm (GSCAT), and the National Environmental Satellite, Data, and Information Service (NESDIS) algorithms. The proposed algorithm shows a normalized bias of approximately 23% for both PR and S-Pol ground truth datasets and a mean error of 0.244 mm h(-1) (PR) and -0.157 mm h(-1) (S-Pol). For rain volume estimates using PR as reference, a correlation coefficient of 0.939 and a normalized bias of 0.039 were found. With respect to rainfall distributions and rain area comparisons, the results showed that the formulation proposed is efficient and compatible with the physics and dynamics of the observed systems over the area of interest. The performance of the other algorithms showed that GSCAT presented low normalized bias for rain areas and rain volume [0.346 (PR) and 0.361 (S-Pol)], and GPROF showed rainfall distribution similar to that of the PR and S-Pol but with a bimodal distribution.
Last, the five algorithms were evaluated during the TRMM-Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) 1999 field campaign to verify the precipitation characteristics observed during the easterly and westerly Amazon wind flow regimes. The proposed algorithm presented a cumulative rainfall distribution similar to the observations during the easterly regime, but it underestimated rainfall rates above 5 mm h(-1) during the westerly period. NESDIS(1) overestimated in both wind regimes but presented the best westerly representation. NESDIS(2), GSCAT, and GPROF underestimated in both regimes, but GPROF was closest to the observations during the easterly flow.
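The skill scores quoted above (normalized bias, mean error, correlation) can be computed with definitions of the following kind; the exact normalization conventions are assumed here for illustration, not taken from the paper:

```python
import numpy as np

def validation_metrics(estimated, observed):
    """Common rainfall-validation metrics (assumed definitions).

    normalized bias : (total estimated - total observed) / total observed
    mean error      : average of pointwise (estimated - observed)
    correlation     : Pearson correlation between the two series
    """
    est = np.asarray(estimated, dtype=float)
    obs = np.asarray(observed, dtype=float)
    normalized_bias = (est.sum() - obs.sum()) / obs.sum()
    mean_error = float(np.mean(est - obs))
    corr = float(np.corrcoef(est, obs)[0, 1])
    return float(normalized_bias), mean_error, corr
```

Applied to matched retrieval/ground-truth rain-rate pairs, these are the quantities reported against the PR and S-Pol references.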
Abstract:
We report a highly efficient switch built from an organic molecule assembled between single-wall carbon nanotube electrodes. We show theoretically that changes in the distance between the electrodes alter the molecular conformation within the gap, dramatically affecting the electronic and charge transport properties, with an on/off ratio larger than 300. This opens up the prospect of combining molecular electronics with carbon nanotubes, offering great possibilities for the design of nanodevices.
Abstract:
Background: Identifying local similarity between two or more sequences, or identifying repeats occurring at least twice in a sequence, is an essential part of the analysis of biological sequences and of their phylogenetic relationships. Finding such fragments while allowing for a certain number of insertions, deletions, and substitutions is, however, known to be a computationally expensive task, and consequently exact methods usually cannot be applied in practice. Results: The filter TUIUIU that we introduce in this paper provides a possible solution to this problem. It can be used as a preprocessing step to any multiple alignment or repeat inference method, eliminating a possibly large fraction of the input that is guaranteed not to contain any approximate repeat. It consists of verifying several strong necessary conditions that can be checked quickly. We implemented three versions of the filter. The first is simply a straightforward extension to the case of multiple sequences of conditions already existing in the literature. The second uses a stronger condition which, as our results show, enables noticeably stronger filtering with negligible (if any) additional time. The third version uses an additional condition and pushes the sensitivity of the filter even further, with non-negligible additional time in many circumstances; our experiments show that it is particularly useful with large error rates. The latter version was applied as a preprocessing step for a multiple alignment tool, obtaining an overall time (filter plus alignment) on average 63 and at best 530 times smaller than before (direct alignment), with, in most cases, a better-quality alignment. Conclusion: To the best of our knowledge, TUIUIU is the first filter designed for multiple repeats and for dealing with error rates greater than 10% of the repeat length.
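The flavor of such necessary-condition filtering can be illustrated with the classic q-gram lemma: two length-m strings within edit distance e must share at least m + 1 - q - e*q common q-grams, so window pairs falling below this bound cannot contain an approximate repeat and can be discarded without any alignment. This sketch is a generic filtration condition, not TUIUIU's (stronger) conditions:

```python
from collections import Counter

def shares_enough_qgrams(s, t, q=3, max_errors=2):
    """Necessary-condition filter based on the q-gram lemma.

    Counts q-grams shared (with multiplicity) between s and t and
    compares against the lemma's lower bound for strings within edit
    distance max_errors.  Returning False proves the pair cannot match;
    returning True only means it survives the filter.
    """
    grams_s = Counter(s[i:i + q] for i in range(len(s) - q + 1))
    grams_t = Counter(t[i:i + q] for i in range(len(t) - q + 1))
    shared = sum((grams_s & grams_t).values())   # multiset intersection
    threshold = len(s) + 1 - q - max_errors * q  # q-gram lemma bound
    return shared >= threshold
```

A filter of this kind is one-sided by construction, which is what guarantees the eliminated input contains no approximate repeat.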
Abstract:
omega-Transaminases have been evaluated as biocatalysts in the reductive amination of organoselenium acetophenones to the corresponding amines, and in the kinetic resolution of racemic organoselenium amines. Kinetic resolution proved to be more efficient than the asymmetric reductive amination. By using these methodologies we were able to obtain both amine enantiomers in high enantiomeric excess (up to 99%). Derivatives of the obtained optically pure o-selenium 1-phenylethyl amine were evaluated as ligands in the palladium-catalyzed asymmetric alkylation, giving the alkylated product in up to 99% ee.
Abstract:
Direct borohydride fuel cells are promising high energy density portable generators. However, their development remains limited by the complexity of the anodic reaction: the borohydride oxidation reaction (BOR) kinetics is slow and occurs at high overvoltages, while it may compete with the heterogeneous hydrolysis of BH(4)(-). Nevertheless, it is usually accepted that gold is rather inactive toward the heterogeneous hydrolysis of BH(4)(-) and presents some activity for the BOR, therefore yielding the complete eight-electron BOR. In the present paper, by coupling online mass spectrometry with electrochemistry, we monitored in situ the H(2) yield during BOR experiments on sputtered gold electrodes. Our results show non-negligible H(2) generation on Au over the whole BOR potential range (0-0.8 V vs reversible hydrogen electrode), thus revealing that gold cannot be considered a faradaic-efficient BOR electrocatalyst. We further propose a relevant reaction pathway for the BOR on gold that accounts for these findings.
Abstract:
The 'blue copper' enzyme bilirubin oxidase from Myrothecium verrucaria shows significantly enhanced adsorption on a pyrolytic graphite 'edge' (PGE) electrode that has been covalently modified with naphthyl-2-carboxylate functionalities by diazonium coupling. Modified electrodes coated with bilirubin oxidase show electrocatalytic voltammograms for the direct, four-electron reduction of O(2) by bilirubin oxidase with up to four times the current density of an unmodified PGE electrode. Electrocatalytic voltammograms measured with a rapidly rotating electrode (to remove effects of O(2) diffusion limitation) have a complex shape (an almost linear dependence of current on potential below pH 6) that is similar regardless of how PGE is chemically modified. Importantly, the same waveform is observed if bilirubin oxidase is adsorbed on Au(111) or Pt(111) single-crystal electrodes (at which activity is short-lived). The electrocatalytic behavior of bilirubin oxidase, including its enhanced response on chemically-modified PGE, therefore reflects inherent properties that do not depend on the electrode material. The variation of voltammetric waveshapes and potential-dependent (O(2)) Michaelis constants with pH and analysis in terms of the dispersion model are consistent with a change in rate-determining step over the pH range 5-8: at pH 5, the high activity is limited by the rate of interfacial redox cycling of the Type 1 copper whereas at pH 8 activity is much lower and a sigmoidal shape is approached, showing that interfacial electron transfer is no longer a limiting factor. The electrocatalytic activity of bilirubin oxidase on Pt(111) appears as a prominent pre-wave to electrocatalysis by Pt surface atoms, thus substantiating in a single, direct experiment that the minimum overpotential required for O(2) reduction by the enzyme is substantially smaller than required at Pt. At pH 8, the onset of O(2) reduction lies within 0.14 V of the four-electron O(2)/2H(2)O potential.
Abstract:
Background: Leptin-deficient mice (Lep(ob)/Lep(ob), also known as ob/ob) are of great importance for studies of obesity, diabetes and other correlated pathologies. Thus, generation of animals carrying the Lep(ob) gene mutation as well as additional genomic modifications has been used to associate genes with metabolic diseases. However, the infertility of Lep(ob)/Lep(ob) mice impairs this kind of breeding experiment. Objective: To propose a new method for production of Lep(ob)/Lep(ob) animals and Lep(ob)/Lep(ob)-derived animal models by restoring the fertility of Lep(ob)/Lep(ob) mice in a stable way through white adipose tissue transplantation. Methods: For this purpose, 1 g of peri-gonadal adipose tissue from lean donors was used in subcutaneous transplantations into Lep(ob)/Lep(ob) animals, and a crossing strategy was established to generate Lep(ob)/Lep(ob)-derived mice. Results: The presented method reduced fourfold the number of animals used to generate double transgenic models (from about 20 to 5 animals per double mutant produced) and minimized the number of genotyping steps (from 3 to 1, reducing the number of Lep gene genotyping assays from 83 to 6). Conclusion: The application of the adipose transplantation technique drastically improves both the production of Lep(ob)/Lep(ob) animals and the generation of Lep(ob)/Lep(ob)-derived animal models. International Journal of Obesity (2009) 33, 938-944; doi: 10.1038/ijo.2009.95; published online 16 June 2009
Abstract:
In this study, the innovation approach is used to estimate the total measurement error associated with power system state estimation. This is required because the power system equations are highly correlated with each other and, as a consequence, part of the measurement errors is masked. For that purpose, an innovation index (II), which quantifies the amount of new information a measurement contains, is proposed. A critical measurement is the limiting case of a measurement with low II: it has a zero II, and its error is totally masked. In other words, that measurement does not bring any innovation to the gross error test. Using the II of a measurement, the gross error masked by the state estimation is recovered; then the total gross error of that measurement is composed. Instead of the classical normalised measurement residual amplitude, the corresponding normalised composed measurement residual amplitude is used in the gross error detection and identification test, but with m degrees of freedom. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to existing state estimation software. The IEEE 14-bus system is used to validate the proposed gross error detection and identification test.
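A residual-sensitivity sketch of the innovation idea: in weighted-least-squares state estimation, the diagonal of S = I - H (H^T W H)^{-1} H^T W measures how much of a measurement's error reaches its residual, and an entry near zero marks a (near-)critical measurement whose error is masked. The index form below is an assumed illustration, not necessarily the paper's exact definition:

```python
import numpy as np

def innovation_indices(H, W):
    """Innovation-style indices from the residual sensitivity matrix.

    H is the measurement Jacobian, W the measurement weight matrix.
    S = I - H (H^T W H)^{-1} H^T W maps measurement errors to residuals.
    The index II_i = sqrt(S_ii / (1 - S_ii)) (assumed form) tends to
    zero for critical measurements, whose errors are fully masked.
    """
    H = np.asarray(H, dtype=float)
    W = np.asarray(W, dtype=float)
    G = H.T @ W @ H                                   # gain matrix
    S = np.eye(H.shape[0]) - H @ np.linalg.solve(G, H.T @ W)
    s = np.clip(np.diag(S), 1e-12, 1 - 1e-12)          # guard endpoints
    return np.sqrt(s / (1 - s))
```

A small II flags a measurement for which the normalised residual test alone cannot see a gross error, motivating the composed residual described above.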
Abstract:
With the relentless quest for improved performance driving ever tighter manufacturing tolerances, machine tools are sometimes unable to meet the desired requirements. One option to improve the tolerances of machine tools is to compensate for their errors. Among all possible sources of machine tool error, thermally induced errors are, in general for newer machines, the most important. The present work demonstrates the evaluation and modelling of the thermal error behaviour of a CNC cylindrical grinding machine during its warm-up period.
Abstract:
In this paper, we address the problem of scheduling jobs in a no-wait flowshop with the objective of minimising the total completion time. This problem is well known to be NP-hard, and therefore most contributions on the topic focus on developing algorithms able to obtain good approximate solutions in a short CPU time. More specifically, various constructive heuristics are available for the problem [such as the ones by Rajendran and Chaudhuri (Nav Res Logist 37: 695-705, 1990), Bertolissi (J Mater Process Technol 107: 459-465, 2000), Aldowaisan and Allahverdi (Omega 32: 345-352, 2004), and the Chins heuristic by Fink and Voß (Eur J Operat Res 151: 400-414, 2003)], as well as a successful local search procedure (Pilot-1-Chins). We propose a new constructive heuristic based on an analogy with the two-machine problem in order to select the candidate to be appended to the partial schedule. The myopic behaviour of the heuristic is tempered by exploring the neighbourhood of the so-obtained partial schedules. The computational results indicate that the proposed heuristic outperforms existing ones in terms of quality of the solution obtained and equals the performance of the time-consuming Pilot-1-Chins.
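A greedy appending heuristic of the Chins type mentioned above can be sketched as follows (a generic illustration, not the paper's new two-machine-analogy selection rule): at each step, append the unscheduled job that keeps the running total completion time smallest, using the standard no-wait start-time delay between consecutive jobs:

```python
def nowait_delay(p_i, p_j):
    """Minimum gap between the start times of consecutive jobs i, j so
    that job j never waits between machines (standard no-wait formula)."""
    m = len(p_i)
    return max(sum(p_i[:k]) - sum(p_j[:k - 1]) for k in range(1, m + 1))

def greedy_total_completion(jobs):
    """Chins-style greedy for the no-wait flowshop, total completion time.

    jobs[j] is the list of processing times of job j on machines 1..m.
    In a no-wait shop a job processes without interruption, so its
    completion time is its start time plus the sum of its processing
    times.  Returns (sequence, total completion time).
    """
    remaining = list(range(len(jobs)))
    seq, start, total = [], 0, 0
    while remaining:
        best = None
        for j in remaining:
            s = 0 if not seq else start + nowait_delay(jobs[seq[-1]], jobs[j])
            c = s + sum(jobs[j])              # completion time of job j
            if best is None or total + c < best[0]:
                best = (total + c, j, s)      # keep cheapest extension
        total, j, start = best
        seq.append(j)
        remaining.remove(j)
    return seq, total
```

The myopia the abstract mentions is visible here: each appended job is chosen only for its immediate effect, which is what neighbourhood exploration of the partial schedules is meant to temper.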