945 results for ERROR AUTOCORRELATION
Abstract:
The readout procedure of charge-coupled device (CCD) cameras is known to introduce image degradation in several scientific imaging fields, especially in astrophysics. In the particular field of particle image velocimetry (PIV), widely used in the scientific community, the readout procedure of the interline CCD sensor induces a bias in the registered position of particle images. This work proposes simple procedures to predict the magnitude of the associated measurement error. In general, the position bias differs between the images of a given particle in each PIV frame. This leads to a substantial bias error in the PIV velocity measurement (~0.1 pixels), the order of magnitude that other typical PIV errors, such as peak-locking, may reach. Based on modern CCD technology and architecture, this work offers a description of the readout phenomenon and proposes a model for the magnitude of the CCD readout bias error. This bias, in turn, generates a velocity measurement bias error when there is an illumination difference between two successive PIV exposures. The model predictions match experiments performed with two 12-bit-depth interline CCD cameras (MegaPlus ES 4.0/E, incorporating the 4-megapixel Kodak KAI-4000M CCD sensor). For different cameras, only two constant values are needed to fit the proposed calibration model and predict the error from the readout procedure. Tests by different researchers using different cameras would allow verification of the model, which can be used to optimize acquisition setups. Simple procedures to obtain these two calibration values are also described.
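The functional form of the calibration is not given in this abstract; the sketch below only illustrates how two constants could be fitted to measured position-bias data with a generic least-squares routine. The model function `bias_model`, its parameters, and the sample data are assumptions for illustration, not the authors' model.

```python
# Illustrative two-constant calibration fit. The quadratic form of `bias_model`
# and the sample measurements are hypothetical stand-ins for the readout-bias
# calibration described in the abstract.
import numpy as np
from scipy.optimize import curve_fit

def bias_model(intensity_ratio, a, b):
    """Assumed form: position bias (pixels) vs. the illumination ratio
    between the two PIV exposures; a and b are the two calibration constants."""
    d = 1.0 - intensity_ratio
    return a * d + b * d ** 2

# Hypothetical calibration data: exposure intensity ratios and measured bias (pixels).
ratios = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
measured_bias = np.array([0.11, 0.08, 0.05, 0.02, 0.00])

(a_fit, b_fit), _ = curve_fit(bias_model, ratios, measured_bias)
print(f"fitted constants: a = {a_fit:.3f}, b = {b_fit:.3f}")
print(f"predicted bias at ratio 0.5: {bias_model(0.5, a_fit, b_fit):.3f} px")
```

Once the two constants are fitted for a given camera, the same expression can be evaluated at the illumination ratio of any acquisition to estimate the expected readout bias.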
Abstract:
In this work, a p-adaptation (modification of the polynomial order) strategy based on the minimization of the truncation error is developed for high-order discontinuous Galerkin methods. The truncation error is approximated by means of a truncation error (tau) estimation procedure, which enables the identification of mesh regions that require adaptation. Three truncation error estimation approaches are developed, termed a posteriori, quasi-a priori, and quasi-a priori corrected. Fine solutions, which are obtained by enriching the polynomial order, are required to solve the numerical problem with adequate accuracy. Of the three truncation error estimation methods, the first requires time-converged solutions, while the last two rely on non-converged solutions, which leads to faster computations. Based on these truncation error estimation methods, algorithms for mesh adaptation were designed and tested. Firstly, an isotropic adaptation approach is presented, which leads to equally distributed polynomial orders in the different coordinate directions. This first implementation is improved by incorporating a method to extrapolate the truncation error, which results in a significant reduction of computational cost. Secondly, the employed high-order method permits the spatial decoupling of the estimated errors and enables anisotropic p-adaptation. The incorporation of anisotropic features leads to meshes with different polynomial orders in the different coordinate directions, so that flow features related to the geometry are better resolved (a schematic of such an adaptation sweep is sketched below). These adaptations result in a significant reduction of degrees of freedom and computational cost, although the amount of improvement depends on the test case. Finally, this anisotropic approach is extended by using error extrapolation, which leads to an even higher reduction in computational cost. These strategies are verified and compared in terms of accuracy and computational cost for the Euler and the compressible Navier-Stokes equations. The main result is that the two quasi-a priori methods achieve a significant reduction in computational cost when compared to a uniform polynomial enrichment. Namely, for a viscous boundary layer flow, we obtain speedup factors of 6.6 and 7.6 for the quasi-a priori and quasi-a priori corrected approaches, respectively.
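The adaptation algorithms themselves are not detailed in the abstract; the following is only a schematic of a truncation-error-driven, anisotropic p-adaptation sweep under assumed data structures and tolerances, not the authors' implementation.

```python
# Schematic anisotropic p-adaptation sweep: the polynomial order of each element is
# raised independently in each coordinate direction whose estimated truncation error
# exceeds a prescribed tolerance. Element layout, error values, and the tolerance are
# placeholders; the DG solver and the tau-estimation step are not shown.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Element:
    order: List[int]                    # polynomial order per coordinate direction
    tau: Tuple[float, float, float]     # estimated truncation error per direction

def adapt_polynomial_orders(elements: List[Element], tau_max: float, p_max: int) -> None:
    for elem in elements:
        for d, tau_d in enumerate(elem.tau):
            if tau_d > tau_max and elem.order[d] < p_max:
                elem.order[d] += 1      # anisotropic enrichment in direction d
            # directions already below the tolerance keep their order, which is
            # where the reduction in degrees of freedom comes from

# Hypothetical element with a large error only in the second (e.g. wall-normal) direction.
mesh = [Element(order=[3, 3, 3], tau=(1e-6, 5e-3, 1e-6))]
adapt_polynomial_orders(mesh, tau_max=1e-4, p_max=8)
print(mesh[0].order)    # -> [3, 4, 3]
```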
Abstract:
A method is presented to filter errors out of multidimensional databases. The method does not require any a priori information about the nature of the errors. In particular, the errors need not be small, random, or zero-mean; they are only required to be relatively uncorrelated with the clean information contained in the database. The method is based on an improved extension of a seminal iterative gappy reconstruction method (able to reconstruct lost information at known positions in the database) due to Everson and Sirovich (1995). The improved gappy reconstruction method is turned into a two-step error filtering method: it first (a) identifies the error locations in the database and then (b) reconstructs the information at these locations by treating the associated data as gappy data. The resulting method filters out O(1) errors in an efficient fashion, both when they are random and when they are systematic, and both when they are concentrated and when they are spread across the database. The performance of the method is first illustrated using a two-dimensional toy-model database resulting from discretizing a transcendental function, and then tested on two CFD-calculated, three-dimensional aerodynamic databases containing the pressure coefficient on the surface of a wing for varying values of the angle of attack. A more general performance analysis of the method is presented, with the intention of quantifying, first, the degree of randomness the method tolerates while still performing correctly and, second, the size of the errors the method can detect. Lastly, some improvements of the method are proposed and verified.
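Everson and Sirovich's gappy reconstruction is commonly implemented as an iterative fill-in based on a low-rank (POD/SVD) model of the data; the sketch below shows that generic iteration on a synthetic matrix. The data, rank, and iteration count are illustrative, and the error-identification step (a) of the thesis method is not included.

```python
# Minimal iterative gappy reconstruction in the spirit of Everson & Sirovich (1995):
# entries flagged as unknown are repeatedly replaced by their low-rank (SVD)
# reconstruction computed from the current filled-in matrix, while known entries
# stay fixed. Synthetic data; not the thesis implementation.
import numpy as np

def gappy_reconstruct(data, known_mask, rank=1, n_iter=200):
    filled = np.where(known_mask, data, np.mean(data[known_mask]))  # initial guess
    for _ in range(n_iter):
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank, :]
        filled = np.where(known_mask, data, low_rank)               # keep known entries
    return filled

# Synthetic rank-1 "database" with ~10% of positions treated as unknown (gappy).
rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0.0, 3.0, 40)), np.cos(np.linspace(0.0, 2.0, 25)))
known = rng.random(clean.shape) > 0.1
recon = gappy_reconstruct(clean, known, rank=1)
print("max error at reconstructed positions:", np.abs(recon - clean)[~known].max())
```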
Abstract:
RNA viruses evolve rapidly. One source of this ability to change rapidly is the apparently high mutation frequency in RNA virus populations. A high mutation frequency is a central tenet of the quasispecies theory. A corollary of the quasispecies theory postulates that, given their high mutation frequency, animal RNA viruses may be susceptible to error catastrophe, in which they undergo a sharp drop in viability after a modest increase in mutation frequency. We recently showed that the important broad-spectrum antiviral drug ribavirin (currently used to treat hepatitis C virus infections, among others) is an RNA virus mutagen, and we proposed that ribavirin exerts its antiviral effect by forcing RNA viruses into error catastrophe. However, a direct demonstration of error catastrophe has not been made for ribavirin or any RNA virus mutagen. Here we describe a direct demonstration of error catastrophe by using ribavirin as the mutagen and poliovirus as a model RNA virus. We demonstrate that ribavirin's antiviral activity is exerted directly through lethal mutagenesis of the viral genetic material. A 99.3% loss in viral genome infectivity is observed after a single round of virus infection at ribavirin concentrations sufficient to cause a 9.7-fold increase in mutagenesis. Compiling data on both the mutation levels and the specific infectivities of poliovirus genomes produced in the presence of ribavirin, we have constructed a graph of error catastrophe showing that normal poliovirus indeed exists at the edge of viability. These data suggest that RNA virus mutagens may represent a promising new class of antiviral drugs.
Abstract:
High-affinity antibodies are generated in mice and humans by means of somatic hypermutation (SHM) of the variable (V) regions of Ig genes. Mutations occur at rates of 10⁻⁵–10⁻³ per base pair per generation, about 10⁶-fold above normal, and are targeted primarily at V-region hot spots by unknown mechanisms. We have measured mRNA expression of DNA polymerases ι, η, and ζ by using cultured Burkitt's lymphoma BL2 cells. These cells exhibit 5–10-fold increases in heavy-chain V-region mutations, targeted predominantly to RGYW (R = A or G, Y = C or T, W = T or A) hot spots, but only if costimulated with T cells and IgM crosslinking, the presumed in vivo requirements for SHM. An ∼4-fold increase in pol ι mRNA occurs within 12 h of coculture with T cells and surface IgM crosslinking. Induction of pols η and ζ occurs with T cells, IgM crosslinking, or both stimuli. The fidelity of pol ι was measured at RGYW hot-spot and non-hot-spot sequences situated at nicks, gaps, and double-strand breaks. Pol ι formed T⋅G mispairs at a frequency of 10⁻², consistent with SHM-generated C-to-T transitions, with a 3-fold increased error rate in hot-spot vs. non-hot-spot sequences for the single-nucleotide overhang. The T cell- and IgM crosslinking-dependent induction of pol ι at 12 h may indicate that an SHM "triggering" event has occurred. However, pols ι, η, and ζ are present under all conditions, suggesting that their presence is not sufficient to generate mutations, because both T cell and IgM stimuli are required for SHM induction.
Abstract:
DNA polymerase V, composed of a heterotrimer of the DNA damage-inducible UmuC and UmuD′2 proteins, working in conjunction with RecA, single-stranded DNA (ssDNA)-binding protein (SSB), the β sliding clamp, and the γ clamp-loading complex, is responsible for most SOS lesion-targeted mutations in Escherichia coli, by catalyzing translesion synthesis (TLS). DNA polymerase II, the product of the damage-inducible polB (dinA) gene, plays a pivotal role in replication restart, a process that bypasses DNA damage in an error-free manner. Replication restart takes place almost immediately after the DNA is damaged (≈2 min post-UV irradiation), whereas TLS occurs after pol V is induced, ≈50 min later. We discuss recent data for pol V-catalyzed TLS and pol II-catalyzed replication restart. Specific roles during TLS for pol V and each of its accessory factors have recently been determined. Although the precise molecular mechanism of pol II-dependent replication restart remains to be elucidated, it has recently been shown to operate in conjunction with the RecFOR and PriA proteins.
Abstract:
Spatial structure of genetic variation within populations, an important interacting influence on evolutionary and ecological processes, can be analyzed in detail by using spatial autocorrelation statistics. This paper characterizes the statistical properties of spatial autocorrelation statistics in this context and develops estimators of gene dispersal based on data on standing patterns of genetic variation. Large numbers of Monte Carlo simulations and a wide variety of sampling strategies are utilized. The results show that spatial autocorrelation statistics are highly predictable and informative. Thus, strong hypothesis tests for neutral theory can be formulated. Most strikingly, robust estimators of gene dispersal can be obtained with practical sample sizes. Details about optimal sampling strategies are also described.
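The abstract does not name the specific statistic; Moran's I is the standard spatial autocorrelation measure in this literature, and the sketch below computes it for allele frequencies sampled at known locations. The binary distance-class weights and the synthetic data are illustrative assumptions.

```python
# Moran's I for allele frequencies at sampled locations, using a single binary
# distance class as the spatial weight matrix. Data and distance class are synthetic.
import numpy as np

def morans_i(values, weights):
    """Moran's I = (n / S0) * sum_ij w_ij z_i z_j / sum_i z_i^2, with z = x - mean(x)."""
    n = len(values)
    z = values - values.mean()
    return n * np.sum(weights * np.outer(z, z)) / (weights.sum() * np.sum(z ** 2))

rng = np.random.default_rng(1)
coords = rng.uniform(0.0, 100.0, size=(50, 2))    # sampling locations
freqs = rng.uniform(0.0, 1.0, size=50)            # allele frequency at each location
dists = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
weights = ((dists > 0) & (dists < 20.0)).astype(float)  # neighbours in one distance class
print("Moran's I:", morans_i(freqs, weights))
```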
Abstract:
The Escherichia coli dnaQ gene encodes the proofreading 3' exonuclease (epsilon subunit) of DNA polymerase III holoenzyme and is a critical determinant of chromosomal replication fidelity. By site-specific mutagenesis, we constructed a mutant, dnaQ926, changing two conserved amino acid residues (Asp-12→Ala and Glu-14→Ala) in the Exo I motif, which, by analogy to other proofreading exonucleases, is essential for the catalytic activity. When residing on a plasmid, dnaQ926 confers a strong, dominant mutator phenotype, suggesting that the protein, although deficient in exonuclease activity, still binds to the polymerase subunit (alpha subunit or dnaE gene product). When dnaQ926 was transferred to the chromosome, replacing the wild-type gene, the cells became inviable. However, viable dnaQ926 strains could be obtained if they contained one of the dnaE alleles previously characterized in our laboratory as antimutator alleles or if they carried a multicopy plasmid containing the E. coli mutL+ gene. These results suggest that loss of proofreading exonuclease activity in dnaQ926 is lethal due to excessive error rates (error catastrophe). Error catastrophe results from both the loss of proofreading and the subsequent saturation of DNA mismatch repair. The probability of lethality by excessive mutation is supported by calculations estimating the number of inactivating mutations in essential genes per chromosome replication.
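As a rough illustration of the error-catastrophe argument (all parameter values below are assumptions chosen for illustration, not numbers from this study): if the loss of proofreading and the subsequent saturation of mismatch repair raise the per-base-pair error rate by several orders of magnitude, the expected number of inactivating mutations in essential genes per chromosome replication approaches or exceeds one, which is incompatible with viability.

```python
# Back-of-the-envelope estimate of inactivating mutations per chromosome replication.
# Every value here is an illustrative assumption, not data from the study.
genome_bp = 4.6e6             # approximate E. coli genome size (bp)
essential_fraction = 0.1      # assumed fraction of the genome in essential genes
inactivating_fraction = 0.3   # assumed fraction of mutations that inactivate a gene

scenarios = {
    "wild type (assumed ~1e-10 errors per bp)": 1e-10,
    "no proofreading + saturated MMR (assumed ~1e-5 errors per bp)": 1e-5,
}
for label, mu in scenarios.items():
    expected = mu * genome_bp * essential_fraction * inactivating_fraction
    print(f"{label}: ~{expected:.2g} inactivating mutations per replication")
```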
Abstract:
Vicarious trial-and-error (VTE) is a term that Muenzinger and Tolman used to describe the rat's conflict-like behavior before responding to choice. Recently, VTE was proposed as a mechanism alternative to the concept of "cognitive map" in accounts of hippocampal function. That is, many phenomena of impaired learning and memory related to hippocampal interventions may be explained by behavioral first principles: reduced conflicting, incipient, pre-choice tendencies to approach and avoid. The nonspatial black-white discrimination learning and VTE behavior of the rat were investigated. Hippocampal-lesioned and sham-lesioned animals were trained for 25 days (20 trials per day) starting at 60 days of age. Each movement of the head from one discriminative stimulus to the other was counted as a VTE instance. Lesioned rats had fewer VTEs than sham controls, and the former learned much more slowly or never learned. After learning, VTE frequency declined. Male and female rats showed no significant differences in VTE behavior or discrimination learning.
Abstract:
In Monte Carlo simulations of both lattice field theories and models of statistical mechanics, identities satisfied by exact mean values, such as Schwinger-Dyson equations, Guerra relations, Callen identities, etc., provide well-known and sensitive tests of thermalization bias as well as checks of pseudo-random-number generators. We point out that they can be further exploited as control variates to reduce statistical errors. The strategy is general, very simple, and almost costless in CPU time. The method is demonstrated in the two-dimensional Ising model at criticality, where the CPU gain factor lies between 2 and 4.
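The estimator is not written out in the abstract; the sketch below shows the generic control-variate construction it relies on, using a synthetic "identity" whose exact mean is zero in place of a real Schwinger-Dyson or Callen identity evaluated on Ising configurations.

```python
# Generic control-variate estimator: for an observable O and a quantity C with an
# exactly known mean (zero here), subtract the optimally scaled fluctuation of C
# to reduce the variance of the estimate of <O>. Correlated synthetic samples
# stand in for measurements from an actual Monte Carlo run.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
c = rng.normal(0.0, 1.0, n)                     # control variate, exact mean known to be 0
o = 1.5 + 0.8 * c + rng.normal(0.0, 0.5, n)     # observable correlated with c

coef = np.cov(o, c)[0, 1] / np.var(c)           # optimal coefficient Cov(O, C) / Var(C)
o_cv = o - coef * c                             # variance-reduced samples, same mean as O

print("plain estimate   :", o.mean(), "+/-", o.std(ddof=1) / np.sqrt(n))
print("control variates :", o_cv.mean(), "+/-", o_cv.std(ddof=1) / np.sqrt(n))
```

Because the exact mean of C is known, subtracting coef * C changes only the variance of the estimator, not its expectation, which is why the correction is essentially free in CPU time.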
Abstract:
To validate clinically an algorithm for correcting the error in the keratometric estimation of corneal power by using a variable keratometric index of refraction (nk) in a normal healthy population.
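For context, the standard keratometric conversion (not taken from this abstract) is P = (nk − 1) / r, with r the anterior corneal radius and nk conventionally fixed at 1.3375; the error being corrected stems from representing the whole cornea by a single fixed index. A minimal sketch follows, where the adjusted index value is a hypothetical placeholder for the variable nk of the algorithm:

```python
# Standard keratometric conversion from anterior corneal radius (mm) to power (D).
# nk = 1.3375 is the conventional keratometric index; the "adjusted" value below
# is a hypothetical placeholder for the variable nk, whose expression is not given here.
def keratometric_power(radius_mm: float, nk: float = 1.3375) -> float:
    return (nk - 1.0) * 1000.0 / radius_mm

r = 7.8  # example anterior corneal radius in mm
print("conventional nk 1.3375:", round(keratometric_power(r), 2), "D")
print("hypothetical adjusted nk 1.3320:", round(keratometric_power(r, nk=1.3320), 2), "D")
```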