180 results for ERROR


Relevance:

10.00%

Publisher:

Abstract:

The crystalline structure of transition metals (TM) has been known for several decades; however, our knowledge of the atomic structure of TM clusters is still far from satisfactory, which compromises an atomistic understanding of their reactivity. For example, almost all density functional theory (DFT) calculations for TM clusters have been based on local (local density approximation, LDA) and semilocal (generalized gradient approximation, GGA) exchange-correlation functionals, yet it is well known that plain DFT does not correct the self-interaction error, which affects the properties of several systems. To improve our basic understanding of the atomic and electronic properties of TM clusters, we report a DFT study of the structural and electronic properties of the Co13, Rh13, and Hf13 clusters using two nonlocal functionals, namely the hybrid HSE (Heyd, Scuseria, and Ernzerhof) functional and GGA + U. For Co13 and Rh13, we found that the improved exchange-correlation functionals decrease the stability of open structures such as the hexagonal bilayer (HBL) and double simple cubic (DSC) relative to the compact icosahedron (ICO) structure, whereas DFT-GGA, DFT-GGA + U, and DFT-HSE yield very similar results for Hf13. Thus, our results suggest that the DSC structure obtained for Rh13 in several plain DFT calculations may be revised when better functionals are used. Using an sd hybridization analysis, we found that strong hybridization favors compact structures; hence, a correct description of the sd hybridization is crucial for the relative energetic stability. For example, the sd hybridization decreases for HBL and DSC and increases for ICO in the case of Co13 and Rh13, while for Hf13 it decreases for all configurations and therefore does not affect the relative stability of open versus compact configurations.
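
To make the stability comparison concrete, here is a minimal Python sketch, with placeholder energies rather than values from the paper, of how relative stabilities ΔE = E(isomer) − E(ICO) can be tabulated per functional; a positive ΔE means the open structure is less stable than the icosahedron.

```python
# Minimal sketch: compare relative isomer stability across functionals.
# The energies below are placeholders, not values from the paper.

# total energies (eV) per functional and isomer: {functional: {isomer: E_tot}}
energies = {
    "GGA":   {"ICO": -100.0, "HBL": -100.4, "DSC": -100.6},
    "GGA+U": {"ICO": -100.0, "HBL":  -99.7, "DSC":  -99.5},
    "HSE":   {"ICO": -100.0, "HBL":  -99.8, "DSC":  -99.6},
}

for functional, e in energies.items():
    ref = e["ICO"]  # icosahedron as the reference structure
    # dE > 0 means the open structure is less stable than ICO
    ranking = {iso: round(E - ref, 3) for iso, E in e.items()}
    print(functional, ranking)
```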

Relevance:

10.00%

Publisher:

Abstract:

Background: Identifying local similarity between two or more sequences, or identifying repeats occurring at least twice in a sequence, is an essential part of the analysis of biological sequences and of their phylogenetic relationships. Finding such fragments while allowing for a certain number of insertions, deletions, and substitutions is, however, known to be a computationally expensive task, so exact methods usually cannot be applied in practice. Results: The filter TUIUIU that we introduce in this paper provides a possible solution to this problem. It can be used as a preprocessing step for any multiple alignment or repeat inference method, eliminating a possibly large fraction of the input that is guaranteed not to contain any approximate repeat. It consists of the verification of several strong necessary conditions that can be checked quickly. We implemented three versions of the filter. The first is a straightforward extension to multiple sequences of conditions already existing in the literature. The second uses a stronger condition which, as our results show, filters appreciably more with negligible (if any) additional time. The third version uses an additional condition and pushes the sensitivity of the filter even further, at a non-negligible additional time cost in many circumstances; our experiments show that it is particularly useful at large error rates. The latter version was applied as a preprocessing step for a multiple alignment tool, yielding an overall time (filter plus alignment) on average 63 and at best 530 times smaller than direct alignment, with a better-quality alignment in most cases. Conclusion: To the best of our knowledge, TUIUIU is the first filter designed for multiple repeats and for error rates greater than 10% of the repeat length.
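
TUIUIU's actual conditions are stronger, but the flavor of a fast necessary-condition filter can be illustrated with the classic q-gram lemma; the Python sketch below is a simplified stand-in, not the published algorithm.

```python
# Illustrative stand-in (not TUIUIU's actual conditions): the classic
# q-gram lemma gives a cheap necessary condition for an approximate match.
# A window containing a copy of a repeat of length L with at most e edits
# must share at least (L + 1) - q * (e + 1) q-grams with the other copy.

def qgrams(s, q):
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def may_contain_repeat(window_a, window_b, L, e, q):
    """Necessary (not sufficient) condition: enough shared q-grams."""
    threshold = (L + 1) - q * (e + 1)
    if threshold <= 0:          # condition is vacuous for these parameters
        return True
    shared = len(qgrams(window_a, q) & qgrams(window_b, q))
    return shared >= threshold

# Window pairs that fail the test are guaranteed not to contain the repeat
# and can be discarded before running an expensive multiple aligner.
print(may_contain_repeat("ACGTACGTAC", "ACGAACGTAC", L=8, e=1, q=3))
```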

Relevance:

10.00%

Publisher:

Abstract:

Background: Several studies in the literature describe measurement error in gene expression data, and several others address regulatory network models; however, only a small fraction combines measurement error with mathematical regulatory networks and shows how to identify these networks under different noise levels. Results: This article investigates the effects of measurement error on the estimation of the parameters of regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, measurement error strongly affects the estimated parameters of regulatory network models, biasing them as predicted by theory. Moreover, when testing the parameters of these models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. To overcome these problems, we present an improved version of the ordinary least squares estimator for independent (regression models) and dependent (autoregressive models) data when the variables are subject to noise. Measurement error estimation procedures for microarrays are also described. Simulation results show that both corrected methods perform better than the standard ones (i.e., those ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions: Measurement error severely affects the identification of regulatory network models; it must therefore be reduced or taken into account to avoid erroneous conclusions. This could be one of the reasons for the high biological false positive rates observed in actual regulatory network models.
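
The bias the abstract refers to can be illustrated with a minimal simulation; the sketch below shows the textbook errors-in-variables attenuation of the OLS slope and its correction for a simple regression with known noise variance, not the authors' full estimator.

```python
# Minimal sketch: attenuation bias from a noisy covariate and its correction.
# sigma_u is assumed known here; in practice it would be estimated from an
# error model for the measurement platform (e.g., microarray replicates).
import numpy as np

rng = np.random.default_rng(0)
n, beta, sigma_u = 5000, 2.0, 0.8
x = rng.normal(0.0, 1.0, n)                 # true covariate
y = beta * x + rng.normal(0.0, 0.5, n)      # response
w = x + rng.normal(0.0, sigma_u, n)         # covariate observed with error

C = np.cov(w, y)
b_naive = C[0, 1] / C[0, 0]                 # attenuated OLS slope
lam = (C[0, 0] - sigma_u**2) / C[0, 0]      # estimated reliability ratio
b_corrected = b_naive / lam                 # corrected estimator

print(f"naive = {b_naive:.3f}, corrected = {b_corrected:.3f}, true = {beta}")
```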

Relevance:

10.00%

Publisher:

Abstract:

Alternative splicing of gene transcripts greatly expands the functional capacity of the genome, and certain splice isoforms may indicate specific disease states such as cancer. Splice junction microarrays interrogate thousands of splice junctions, but data analysis is difficult and error-prone because of the increased complexity compared with differential gene expression analysis. We present Rank Change Detection (RCD) as a method to identify differential splicing events based on a straightforward probabilistic model comparing the over- or underrepresentation of two or more competing isoforms. RCD has advantages over commonly used methods because it is robust to false positives arising from nonlinear trends in microarray measurements. Further, RCD does not depend on prior knowledge of splice isoforms, yet it takes advantage of the inherent structure of mutually exclusive junctions, and it generalizes conceptually to other types of splicing arrays or to RNA-Seq. RCD specifically identifies the biologically important cases in which a splice junction becomes more or less prevalent compared with other mutually exclusive junctions. The example data are from different glioblastoma tumor cell lines assayed with Agilent microarrays.
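
The published RCD method wraps this idea in a probabilistic model; the hypothetical Python sketch below only illustrates the underlying rank-comparison step on made-up junction intensities.

```python
# Hypothetical sketch of the rank-change idea (not the published RCD code):
# within a group of mutually exclusive junctions, compare the rank of each
# junction's intensity in two conditions and flag junctions whose rank changes.
import numpy as np

def ranks(values):
    """Rank of each junction within its mutually exclusive group (0 = lowest)."""
    order = np.argsort(values)
    r = np.empty_like(order)
    r[order] = np.arange(len(values))
    return r

# entries = mutually exclusive junctions measured in two conditions
normal = np.array([120.0, 800.0, 40.0])   # junction intensities, condition A
tumor  = np.array([900.0, 150.0, 35.0])   # junction intensities, condition B

changed = ranks(normal) != ranks(tumor)
print("junctions with a rank change:", np.where(changed)[0])  # -> [0 1]
```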

Relevance:

10.00%

Publisher:

Abstract:

Extensive ab initio calculations using a complete active space second-order perturbation theory wavefunction, including scalar and spin-orbit relativistic effects with a quadruple-zeta quality basis set, were used to construct an analytical potential energy surface (PES) of the ground state of the [H, O, I] system. A total of 5344 points were fit to a three-dimensional function of the internuclear distances, with a global root-mean-square error of 1.26 kcal mol⁻¹. The resulting PES accurately describes the main features of this system: the HOI and HIO isomers, the transition state between them, and all dissociation asymptotes. After a small adjustment, using a scaling factor on the internal coordinates of HOI, the frequencies calculated in this work agree with the available experimental data to within 10 cm⁻¹. [doi: 10.1063/1.3615545]
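
The paper's analytical form is not reproduced here, but the fitting step itself can be sketched: a linear least-squares fit of energies to a polynomial basis in the three internuclear distances, with the global RMSE reported as above. Grid and energies below are placeholders.

```python
# Illustrative sketch (not the paper's functional form): fit ab initio
# energies to a polynomial in the three internuclear distances by linear
# least squares and report the global RMSE, as done when building a PES.
import numpy as np

rng = np.random.default_rng(1)
R = rng.uniform(1.0, 4.0, size=(500, 3))   # r_HO, r_OI, r_HI (placeholder geometries)
E = np.exp(-R).sum(axis=1) + 0.01 * rng.normal(size=500)  # placeholder energies

# design matrix: all monomials r1^i * r2^j * r3^k with total degree i+j+k <= 3
powers = [(i, j, k) for i in range(4) for j in range(4) for k in range(4)
          if i + j + k <= 3]
A = np.column_stack([R[:, 0]**i * R[:, 1]**j * R[:, 2]**k for i, j, k in powers])

coef, *_ = np.linalg.lstsq(A, E, rcond=None)
rmse = np.sqrt(np.mean((A @ coef - E) ** 2))
print(f"{len(powers)} terms, global RMSE = {rmse:.4f} (same units as E)")
```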

Relevance:

10.00%

Publisher:

Abstract:

Background: Mutations in TP53 are common events during carcinogenesis. In addition to gene mutations, several reports have focused on TP53 polymorphisms as risk factors for malignant disease. Many studies have suggested that the status of the TP53 codon 72 polymorphism could influence cancer susceptibility. However, the results have been inconsistent, and various methodological features can contribute to departures from Hardy-Weinberg equilibrium, a condition that may influence disease risk estimates. The most widely accepted way of detecting genotyping error is to confirm genotypes by sequencing and/or by a separate method. Results: We developed two new genotyping methods for detection of the TP53 codon 72 polymorphism: denaturing high-performance liquid chromatography (DHPLC) and dot blot hybridization. These methods were compared with restriction fragment length polymorphism (RFLP) analysis using two different restriction enzymes. We observed high agreement among all the methodologies assayed. Dot blot hybridization and DHPLC results were more concordant with each other than either method was with RFLP. Conclusions: Although variations may occur, our results indicate that DHPLC and dot blot hybridization can be used as reliable screening methods for detecting the TP53 codon 72 polymorphism, especially in molecular epidemiologic studies, where high-throughput methodologies are required.
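
The Hardy-Weinberg check mentioned above is a standard chi-square test on genotype counts; a minimal Python sketch follows, with hypothetical counts rather than data from the study.

```python
# Minimal sketch of a Hardy-Weinberg equilibrium test for a biallelic locus
# (Arg/Pro at TP53 codon 72); the genotype counts below are placeholders.
from scipy.stats import chi2

obs = {"ArgArg": 120, "ArgPro": 90, "ProPro": 30}   # hypothetical genotype counts
n = sum(obs.values())
p = (2 * obs["ArgArg"] + obs["ArgPro"]) / (2 * n)   # Arg allele frequency
q = 1 - p

# expected counts under HWE: p^2, 2pq, q^2 times the sample size
exp = {"ArgArg": n * p * p, "ArgPro": 2 * n * p * q, "ProPro": n * q * q}
chi_sq = sum((obs[g] - exp[g]) ** 2 / exp[g] for g in obs)
p_value = chi2.sf(chi_sq, df=1)   # 3 genotypes - 1 - 1 estimated allele frequency
print(f"chi2 = {chi_sq:.3f}, p = {p_value:.3f}")
```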

Relevance:

10.00%

Publisher:

Abstract:

Soil bulk density values are needed to convert organic carbon content into mass of organic carbon per unit area. However, field sampling and measurement of soil bulk density are labour-intensive, costly and tedious. Near-infrared reflectance spectroscopy (NIRS) is a physically non-destructive, rapid, reproducible and low-cost method that characterizes materials according to their reflectance in the near-infrared spectral region. The aim of this paper was to investigate the ability of NIRS to predict soil bulk density and to compare its performance with published pedotransfer functions. The study was carried out on a dataset of 1184 soil samples originating from a reforestation area in the Brazilian Amazon basin; conventional soil bulk density values were obtained with metallic "core cylinders". The results indicate that the modified partial least squares regression applied to the spectral data is an alternative to the published pedotransfer functions tested in this study for predicting soil bulk density. The NIRS method gave the accuracy error closest to zero (-0.002 g cm⁻³) and the lowest prediction error (0.13 g cm⁻³), and the coefficient of variation of the validation sets ranged from 8.1 to 8.9% of the mean reference values. Further research is nevertheless required to assess the limits and specificities of the NIRS method, but it may have advantages for soil bulk density prediction, especially in environments such as the Amazon forest.
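
As a rough illustration of the chemometric workflow, the Python sketch below uses standard PLS (as a stand-in for the paper's modified PLS) on synthetic spectra and reports bias and prediction error in the same spirit as the figures quoted above.

```python
# Hedged sketch: partial least squares regression of bulk density on NIR
# spectra. Data are synthetic stand-ins; the paper used a modified PLS on
# 1184 Amazon soil samples.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 700))          # 300 spectra x 700 wavelengths (placeholder)
y = X[:, :10].mean(axis=1) + 0.05 * rng.normal(size=300)  # placeholder target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()

bias = np.mean(pred - y_te)                      # "accuracy error" in the paper's terms
rmsep = np.sqrt(np.mean((pred - y_te) ** 2))     # prediction error
print(f"bias = {bias:.4f}, RMSEP = {rmsep:.4f}")
```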

Relevance:

10.00%

Publisher:

Abstract:

Southeastern Brazil has seen dramatic landscape modification in recent decades due to the expansion of agriculture and urban areas, and these changes have influenced the distribution and abundance of vertebrates. We developed predictive models of the ecological and spatial distribution of capybaras (Hydrochoerus hydrochaeris) using ecological niche modeling. Most occurrences of capybaras were in flat areas with water bodies surrounded by sugarcane and pasture. More than 75% of the Piracicaba River basin was estimated to be potentially habitable by capybaras. The models had low omission error (2.3-3.4%) but higher commission error (91.0-98.5%); these "model failures" seem to be related more to local habitat characteristics than to spatial ones. The potential distribution of capybaras in the basin is associated with anthropogenic habitats, particularly with intensive agricultural land use.
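
The two error rates quoted above come from a confusion matrix of model predictions against occurrence records; a minimal Python sketch with placeholder counts shows how they are computed.

```python
# Sketch of omission and commission error from prediction/record counts
# (counts are placeholders, not the study's data). In niche modeling, high
# apparent commission can simply reflect suitable but unrecorded sites.
tp, fn = 85, 2     # sites with records: predicted habitable / predicted unsuitable
fp, tn = 910, 3    # sites without records: predicted habitable / unsuitable

omission = fn / (tp + fn)      # fraction of known occurrences the model misses
commission = fp / (tp + fp)    # fraction of predicted-habitable sites with no record
print(f"omission = {omission:.1%}, commission = {commission:.1%}")
```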

Relevance:

10.00%

Publisher:

Abstract:

For environmental quality assessment, INAA was applied to determine chemical elements in small (200 mg) and large (200 g) samples of leaves from 200 trees. By applying Ingamells' sampling constant, the expected percent standard deviation was estimated at 0.9-2.2% for the 200 mg samples. For the composite (200 g) samples, in contrast, the expected standard deviation varied from 0.5 to 10%, against analytical uncertainties ranging from 2 to 30%. The results therefore suggest expressing the degree of representativeness as a source of uncertainty, which increases the reliability of environmental studies, mainly in the case of composite samples.
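
The Ingamells estimate above follows from the sampling-constant relation m·R² = Kₛ, where Kₛ is the subsample mass giving a 1% relative sampling standard deviation for a well-mixed material; the sketch below uses a placeholder Kₛ, not a value from the study.

```python
# Sketch of the Ingamells sampling-constant estimate: K_s is the mass (g)
# giving a 1% sampling standard deviation, so R(%) = sqrt(K_s / m).
# The K_s value below is a placeholder, not one from the study.
K_s = 1.0                  # sampling constant, g
for m in (0.2, 200.0):     # 200 mg and 200 g subsamples
    R = (K_s / m) ** 0.5   # expected relative standard deviation, %
    print(f"m = {m:g} g -> expected sampling RSD = {R:.2f}%")
```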

Relevance:

10.00%

Publisher:

Abstract:

In 2003-2004, several food items were purchased from large commercial outlets in Coimbra, Portugal. The items included meats (chicken, pork, beef), eggs, rice, beans and vegetables (tomato, carrot, potato, cabbage, broccoli, lettuce). Elemental analysis was carried out by INAA at the Technological and Nuclear Institute (ITN, Portugal), the Nuclear Energy Centre for Agriculture (CENA, Brazil), and the Nuclear Engineering Teaching Lab of the University of Texas at Austin (NETL, USA); at the latter two, INAA was also combined with Compton suppression. By applying Compton suppression, (1) the detection limits for arsenic, copper and potassium improved; (2) the counting-statistics error for molybdenum diminished; and (3) the long-lived zinc had its 1115-keV photopeak better defined. In general, however, the improvement sought by introducing Compton suppression into foodstuff analysis was not significant. Lettuce, cabbage and chicken (liver, stomach, heart) were the richest of the analysed foods in terms of human nutrients.

Relevance:

10.00%

Publisher:

Abstract:

Inductively coupled plasma optical emission spectrometers (ICP OES) allow fast simultaneous measurement of several spectral lines for multiple elements. Combining the signal intensities of two or more emission lines for each element may bring advantages such as improved precision and the minimization of systematic errors caused by spectral interferences and matrix effects. In this work, signal intensities of several spectral lines were combined for the determination of Al, Cd, Co, Cr, Mn, Pb, and Zn in water. Parameters for evaluating the calibration model were then calculated to select the combination of emission lines giving the best accuracy (lowest values of PRESS, the predicted residual error sum of squares, and RMSEP, the root mean square error of prediction). Limits of detection (LOD) obtained using multiple lines were 7.1, 0.5, 4.4, 0.042, 3.3, 28 and 6.7 µg L⁻¹ (n = 10) for Al, Cd, Co, Cr, Mn, Pb and Zn, respectively, in the presence of concomitants. In contrast, the LODs established for the most intense emission line alone were 16, 0.7, 8.4, 0.074, 23, 26 and 9.6 µg L⁻¹ (n = 10) for the same elements in the presence of concomitants. The accuracy of the developed procedure was demonstrated using a water certified reference material. The use of multiple lines improved the sensitivity, making the determination of these analytes feasible at the target values required by current environmental legislation for water samples, and it was also demonstrated that measurements on multiple lines can be employed as a tool to verify the accuracy of an analytical procedure in ICP OES.
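
The selection criterion can be sketched directly: compute PRESS and RMSEP by leave-one-out cross-validation for a calibration built on one line versus a combination of lines. The Python example below uses synthetic intensities, not measured ICP OES signals.

```python
# Minimal sketch of the line-selection criterion: PRESS and RMSEP for a
# single-line versus a multi-line calibration (synthetic intensities).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(3)
conc = np.linspace(0.1, 10.0, 20)             # standard concentrations
line1 = 50 * conc + rng.normal(0, 8, 20)      # noisy emission line
line2 = 30 * conc + rng.normal(0, 3, 20)      # second, cleaner line

for name, X in [("line1 only", line1[:, None]),
                ("line1 + line2", np.column_stack([line1, line2]))]:
    pred = cross_val_predict(LinearRegression(), X, conc, cv=LeaveOneOut())
    press = np.sum((conc - pred) ** 2)        # predicted residual error sum of squares
    rmsep = np.sqrt(press / len(conc))        # root mean square error of prediction
    print(f"{name}: PRESS = {press:.3f}, RMSEP = {rmsep:.3f}")
```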

Relevance:

10.00%

Publisher:

Abstract:

Objective: We carry out a systematic assessment of a suite of kernel-based learning machines on the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values of the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of wavelet basis on the quality of the extracted features; four wavelet basis functions were considered in this study. We then provide the average cross-validation accuracy values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. Conclusions: Overall, the results show that all the kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing most consistently. The choices of kernel function, parameter value, and feature extractor are critical decisions, although the choice of wavelet family seems not to be so relevant. The statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged across all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality).
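
The kernel-parameter sensitivity experiment can be sketched with the standard SVM alone; the Python example below sweeps the Gaussian-kernel radius over synthetic two-class "EEG features" (the study used wavelet and Lyapunov features), assuming the usual radius-to-gamma conversion for the RBF kernel.

```python
# Hedged sketch of the sensitivity experiment: sweep the Gaussian-kernel
# radius for a standard SVM and record cross-validation accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# stand-in for extracted EEG features (21 features, as in the paper's setup)
X, y = make_classification(n_samples=200, n_features=21, random_state=0)

for radius in np.logspace(-2, 2, 5):        # the paper sweeps 26 radius values
    gamma = 1.0 / (2.0 * radius ** 2)       # RBF: exp(-||x - x'||^2 / (2 r^2))
    acc = cross_val_score(SVC(kernel="rbf", gamma=gamma), X, y, cv=5).mean()
    print(f"radius = {radius:8.3f} -> CV accuracy = {acc:.3f}")
```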

Relevance:

10.00%

Publisher:

Abstract:

Here, I investigate the use of Bayesian updating rules to model how social agents change their minds in continuous opinion models. Given another agent's statement about the continuous value of a variable, interesting dynamics emerge when an agent assigns to that value a likelihood that is a mixture of a Gaussian and a uniform distribution; the uniform component represents the idea that the other agent might have no idea what is being talked about. The effect of updating only the first moment of the distribution is studied, and we will see that this generates results similar to those of the bounded confidence models. When the second moment is also updated, several different opinions always survive in the long run, as agents become more stubborn with time; however, depending on the probability of error and the initial uncertainty, those opinions might cluster around a central value.
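
A numerical Python sketch of one such update follows: the agent's belief is a Gaussian, the likelihood of a received statement is the Gaussian-uniform mixture, and the posterior moments are computed on a grid. Parameter values are illustrative, not the paper's.

```python
# Numerical sketch of the update rule described above. Discretizing the
# opinion space keeps the moment updates exact up to quadrature error.
import numpy as np

def update(mu, sigma, x_j, p=0.7, eps=0.1):
    """One Bayesian update of a Gaussian belief after hearing statement x_j."""
    x = np.linspace(0.0, 1.0, 2001)                      # discretized opinion space
    prior = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    gauss = np.exp(-0.5 * ((x_j - x) / eps) ** 2) / (eps * np.sqrt(2 * np.pi))
    like = p * gauss + (1 - p) * 1.0                     # uniform density on [0, 1] is 1
    post = prior * like
    post /= post.sum()                                   # normalize on the grid
    new_mu = (x * post).sum()                            # updated first moment
    new_var = ((x - new_mu) ** 2 * post).sum()           # updated second central moment
    return new_mu, np.sqrt(new_var)

print(update(mu=0.5, sigma=0.2, x_j=0.9))  # mean moves toward x_j; shift depends on p
```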

Relevance:

10.00%

Publisher:

Abstract:

Science is a fundamental human activity, and we trust its results because it has several error-correcting mechanisms: it is subject to experimental tests that are replicated by independent parties. Given the huge amount of information available and the information asymmetry between producers and users of knowledge, scientists have to rely on the reports of others, which makes it possible for social effects to influence the scientific community. Here, an opinion dynamics agent model is proposed to describe this situation. The influence of Nature through experiments is described as an external field that acts on the experimental agents. We will see that the retirement of old scientists can be fundamental to the acceptance of a new theory. We also investigate the interplay between social influence and observations. This allows us to gain insight into when social effects have a negligible impact on the conclusions of a scientific community and when we should worry about them.
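
A generic illustrative Python sketch of this kind of setup, not the paper's exact model, pairs random social averaging with an external field that only "experimental" agents feel; all parameter values are placeholders.

```python
# Generic illustrative sketch (not the paper's exact model): opinion agents
# average with a random peer, while "experimental" agents are also pulled
# toward an external field h representing Nature's answer.
import numpy as np

rng = np.random.default_rng(4)
n, h, field_strength = 100, 1.0, 0.1
opinions = rng.uniform(-1, 1, n)
is_experimental = rng.random(n) < 0.2       # 20% of agents run experiments

for _ in range(500):
    i, j = rng.integers(n, size=2)
    opinions[i] += 0.3 * (opinions[j] - opinions[i])        # social influence
    if is_experimental[i]:
        opinions[i] += field_strength * (h - opinions[i])   # pull toward Nature

print(f"mean opinion = {opinions.mean():.2f} (field at h = {h})")
```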

Relevance:

10.00%

Publisher:

Abstract:

Aim: To describe the perceptions and attitudes of registered nurses (RNs) towards adverse events (AEs) in nursing care. Background: The professionals' subjective perspectives should be taken into account in the prevention of AEs in care settings. Method: Schutz's social phenomenology was adopted. Interviews were conducted with nine intensive care unit RNs. Results: Five descriptive categories emerged: (1) the occurrence of AEs is inherent to the human condition but provokes a feeling of insecurity; (2) the occurrence of AEs indicates failures in the systematization of health care; (3) the professionals' attitudes towards AEs should be permeated by ethical principles; (4) the priority regarding AEs should be the mitigation of harm to patients; and (5) decisions regarding the communication of AEs were determined by the severity of the error. Conclusions: The various subjective perspectives related to the occurrence of AEs require a systematization of health care focused on prevention. Ethical behaviour is essential to patient safety. Implications for nursing management: Activities aimed at the prevention of AEs should be developed jointly by the professionals and the health care institution. A culture of safety, not punishment, and improvement in the quality of care provided to patients should be priorities.