898 results for error-prone PCR


Relevance:

80.00%

Abstract:

Although postmortem CT suffices for diagnosing most forms of traumatic death, the examination of natural death is, to date, very difficult and error-prone. The introduction of postmortem angiography has led to improved radiologic diagnosis of natural deaths. Nevertheless, histologic changes to tissues, an important aspect of traditional examination procedures, remain obscure even with CT and CT angiography. For this reason, we examined the accuracy of a minimally invasive procedure (i.e., CT angiography combined with biopsy) in diagnosing major findings and the cause of death in natural deaths.

Relevance:

80.00%

Abstract:

A confocal imaging and image-processing scheme is introduced to visualize and evaluate the spatial distribution of spectral information in tissue. The image data are recorded using a confocal laser-scanning microscope equipped with a detection unit that provides high spectral resolution. The processing scheme is based on spectral data, is less error-prone than intensity-based visualization and evaluation methods, and provides quantitative information on the composition of the sample. The method is tested and validated in the context of the development of dermal drug delivery systems, and a quantitative uptake indicator is introduced to compare the performance of different delivery systems. A drug penetration study was performed in vitro. The results show that the method is able to detect, visualize, and measure spectral information in tissue. In the penetration study, the uptake efficiencies of different experimental setups could be discriminated and quantitatively described. The developed uptake indicator is a step towards quantitative assessment and, more generally, beyond pharmaceutical research, provides valuable information on tissue composition. It can potentially be used for clinical in vitro and in vivo applications.
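
The abstract does not spell out how the uptake indicator is computed. Purely as an illustration of the general idea, the sketch below unmixes each pixel's spectrum against reference spectra (e.g., drug vs. tissue autofluorescence) and reports the drug's share of the total unmixed signal; the unmixing approach, all names, and the indicator formula are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import nnls

def uptake_indicator(stack, ref_spectra, drug_idx=0):
    """Per-pixel spectral unmixing followed by a simple uptake ratio.

    stack       -- (rows, cols, bands) confocal image with spectral detection
    ref_spectra -- (n_components, bands) reference emission spectra,
                   e.g. [drug, tissue autofluorescence]
    Returns the fraction of total unmixed signal attributed to the drug.
    """
    rows, cols, bands = stack.shape
    A = ref_spectra.T                      # (bands, n_components)
    abundances = np.zeros((rows * cols, ref_spectra.shape[0]))
    # Non-negative least squares per pixel (slow loop, kept for clarity)
    for i, pixel in enumerate(stack.reshape(-1, bands)):
        abundances[i], _ = nnls(A, pixel)
    drug = abundances[:, drug_idx].sum()
    total = abundances.sum()
    return drug / total if total > 0 else 0.0
```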

Relevance:

80.00%

Abstract:

Genetic instability in mammalian cells can occur by many different mechanisms. In the absence of exogenous sources of DNA damage, the DNA structure itself has been implicated in genetic instability. When the canonical B-DNA helix is naturally altered to form a non-canonical DNA structure such as Z-DNA or H-DNA, this can lead to genetic instability in the form of DNA double-strand breaks (DSBs) (1, 2). Our laboratory found that the stability of these non-B DNA structures differs between mammals and Escherichia coli (E. coli) (1, 2). One explanation for this difference may lie in how DSBs are repaired in each species. Non-homologous end-joining (NHEJ) is the primary DSB repair pathway in mammalian cells, while bacteria that lack NHEJ (such as E. coli) use homologous recombination (HR) to repair DSBs. To investigate the role of the error-prone NHEJ repair pathway in DNA structure-induced genetic instability, E. coli cells were modified to express genes providing a functional NHEJ system under different HR backgrounds. The NHEJ-sufficient system of Mycobacterium tuberculosis is composed of Ku and Ligase D (LigD) (3). These inducible NHEJ components were expressed individually and together in E. coli cells, with or without functional HR (RecA/RecB), and the Z-DNA- and H-DNA-induced mutations were characterized. The Z-DNA structure gave rise to higher mutation frequencies than the controls regardless of the DSB repair pathway(s) available; however, the type of mutants produced after repair was largely dictated by the available DSB repair system, as indicated by the shift from 2% large-scale deletions in the total mutant population to 24% when NHEJ was present (4). This suggests that NHEJ plays a role in the large deletions induced by Z-DNA-forming sequences. The H-DNA structure, however, did not exhibit an increase in mutagenesis in the newly engineered E. coli environment, suggesting the involvement of other factors in regulating H-DNA formation/stability in bacterial cells. Accurate repair by established DSB repair pathways is essential to maintain the stability of eukaryotic and prokaryotic genomes, and our results suggest that an error-prone NHEJ pathway is involved in non-B DNA structure-induced mutagenesis in both prokaryotes and eukaryotes.

Relevance:

80.00%

Abstract:

Manual counting of bacterial colony-forming units (CFUs) on agar plates is laborious and error-prone. We therefore implemented a colony counting system with a novel segmentation algorithm to discriminate bacterial colonies from blood and other agar plates. The colony counter hardware was designed and a novel segmentation algorithm was written in MATLAB. In brief, pre-processing with top-hat filtering to obtain a uniform background was followed by the segmentation step, during which the colony images were extracted from the blood agar and individual colonies were separated. A Bayes classifier was then applied to count the final number of bacterial colonies, as some colonies could still be concatenated into larger groups. To assess the accuracy and performance of the colony counter, we tested automated counting on different agar plates with known CFU numbers of S. pneumoniae, P. aeruginosa, and M. catarrhalis, and observed excellent performance.
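
The published pipeline was implemented in MATLAB; purely to illustrate the described steps (top-hat background flattening, thresholding, colony separation), here is a minimal Python/scikit-image sketch. The final Bayes-classifier step for concatenated colonies is replaced by a simple distance-transform watershed, and all parameter values are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import io, filters, morphology, feature, segmentation

def count_colonies(path, tophat_radius=15, min_distance=7):
    """Rough colony count on a single agar-plate image."""
    img = io.imread(path, as_gray=True)
    # Top-hat filtering flattens the uneven agar background
    flat = morphology.white_tophat(img, morphology.disk(tophat_radius))
    mask = flat > filters.threshold_otsu(flat)
    # Split touching colonies with a distance-transform watershed
    distance = ndi.distance_transform_edt(mask)
    coords = feature.peak_local_max(distance, min_distance=min_distance,
                                    labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    labels = segmentation.watershed(-distance, markers, mask=mask)
    return labels.max()   # number of separated colonies
```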

Relevance:

80.00%

Abstract:

Debuggers are crucial tools for developing object-oriented software systems, as they give developers direct access to the running system. Nevertheless, traditional debuggers rely on generic mechanisms to explore and exhibit the execution stack and system state, while developers reason about and formulate domain-specific questions using concepts and abstractions from their application domains. This creates an abstraction gap between the debugging needs and the debugging support, leading to inefficient and error-prone debugging. To reduce this gap, we propose a framework for developing domain-specific debuggers called the Moldable Debugger. The Moldable Debugger is adapted to a domain by creating and combining domain-specific debugging operations with domain-specific debugging views, and adapts itself to a domain by selecting, at run time, appropriate debugging operations and views. We motivate the need for domain-specific debugging, identify a set of key requirements, and show how our approach improves debugging by adapting the debugger to several domains.
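
The Moldable Debugger itself is a Pharo Smalltalk framework; the toy Python sketch below only illustrates the core mechanism the abstract describes, namely selecting domain-specific debugging views at run time via activation predicates. All class and attribute names are invented for illustration.

```python
class DebuggingView:
    """A domain-specific presentation of the paused program state."""
    def __init__(self, name, applies_to, render):
        self.name = name
        self.applies_to = applies_to   # predicate over the debug session
        self.render = render           # session -> domain-level display

class MoldableDebugger:
    """Selects, at run time, the views whose activation predicates
    match the current debugging session."""
    def __init__(self):
        self.views = []

    def register(self, view):
        self.views.append(view)

    def views_for(self, session):
        return [v for v in self.views if v.applies_to(session)]

# Example: a view that activates only when a parser object is on the stack
parser_view = DebuggingView(
    name="Parser stream",
    applies_to=lambda s: s.top_frame_receiver_class == "Parser",
    render=lambda s: f"input position: {s.receiver_state.get('position')}",
)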

Relevance:

80.00%

Abstract:

Attractive business cases in various application fields contribute to sustained long-term interest in indoor localization and tracking within the research community. Location tracking is generally treated as a dynamic state estimation problem consisting of two steps: (i) location estimation through measurement, and (ii) location prediction. For the estimation step, one of the most efficient and low-cost solutions is Received Signal Strength (RSS)-based ranging. However, various challenges remain to be addressed, including unrealistic propagation models, non-line-of-sight (NLOS) conditions, and multipath propagation. Particle filters are a popular choice for dealing with the inherent non-linearities in both location measurements and motion dynamics. While such filters have been successfully applied to accurate, time-based ranging measurements, dealing with the more error-prone RSS-based ranging remains challenging. In this work, we address the above issues with a novel weighted-likelihood bootstrap particle filter for tracking via RSS-based ranging. Our filter weights the individual likelihoods from different anchor nodes exponentially, according to the ranging estimate. We also employ an improved propagation model, proposed in our recent work, for more accurate RSS-based ranging. We implemented and tested our algorithm in a passive localization system with IEEE 802.15.4 signals, showing that our proposed solution largely outperforms a traditional bootstrap particle filter.
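
The paper's exact weighting function is not given in the abstract. The sketch below shows one plausible form of the core update step: each anchor's Gaussian ranging likelihood is raised to an exponent derived from its ranging estimate, so nearer (more trustworthy) anchors dominate. The path-loss parameters and the gamma formulation are assumptions, not the authors' model.

```python
import numpy as np

def rss_to_range(rss_dbm, tx_power=-40.0, path_loss_exp=2.5):
    """Log-distance path-loss inversion (parameter values are assumptions)."""
    return 10 ** ((tx_power - rss_dbm) / (10 * path_loss_exp))

def weighted_likelihood_step(particles, weights, anchors, rss, sigma=2.0):
    """One update of a bootstrap particle filter where each anchor's
    likelihood is weighted exponentially according to its ranging estimate."""
    ranges = rss_to_range(rss)              # (n_anchors,)
    gammas = ranges.min() / ranges          # in (0, 1]; assumed weighting rule
    for anchor, r, g in zip(anchors, ranges, gammas):
        d = np.linalg.norm(particles - anchor, axis=1)
        lik = np.exp(-0.5 * ((d - r) / sigma) ** 2)
        weights *= lik ** g                 # exponential likelihood weighting
    weights /= weights.sum()
    # Systematic resampling
    u = (np.arange(len(weights)) + np.random.rand()) / len(weights)
    idx = np.searchsorted(np.cumsum(weights), u)
    return particles[idx], np.full(len(weights), 1.0 / len(weights))
```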

Relevance:

80.00%

Abstract:

Nonsense-mediated decay (NMD) degrades aberrant transcripts containing premature termination codons (PTCs). The T-cell receptor (TCR) locus undergoes error-prone rearrangements that frequently generate PTCs. Transcripts harboring PTCs from this locus are downregulated much more strongly than transcripts from non-rearranging genes, and efficient splicing is essential for this robust downregulation. Here I show that TCR NMD is unique in another respect: it is not impaired by RNAi-mediated depletion of the NMD factor UPF3b. This differentiates TCR transcripts from classical NMD substrates (assayed using β-globin or triose phosphate isomerase transcripts), which do depend on UPF3b. Depletion of UPF3a, a gene related to UPF3b, also had no effect on TCR NMD. Mapping experiments identified TCR sequences that, when deleted or mutated, caused a switch to UPF3b dependence. Since UPF3b dependence was invariably accompanied by less efficient RNA splicing, this suggests that UPF3b-dependent NMD occurs when transcripts are generated by inefficient splicing. Microarray analysis revealed many NMD-targeted mRNAs from wild-type genes whose downregulation is impervious to UPF3b depletion. This suggests the existence of an alternative, UPF3b-independent NMD pathway that is widely used to downregulate the levels of both normal and mutant transcripts. During the course of my studies, I also found that the function of UPF3a is fundamentally distinct from that of UPF3b in several respects. First, classical NMD was not impaired by UPF3a depletion, whereas it was reversed by UPF3b depletion. Second, UPF3a depletion had no effect on NMD elicited by tethered UPF2, whereas UPF3b depletion blocked this response. Thus, UPF3a does not function in classical NMD. Third, UPF3b depletion upregulated the expression of UPF3a, whereas UPF3a depletion had no effect on UPF3b expression. This suggests that a UPF3b-mediated feedback network regulates UPF3a expression. Lastly, UPF3a depletion, but not UPF3b depletion, significantly upregulated TCR precursor RNAs. This suggests that UPF3a, not UPF3b, functions in the surveillance of precursor RNAs, which typically contain many PTCs in their introns. Collectively, my data suggest that UPF3a and UPF3b are not functionally redundant, as previously thought, but instead have separable functions.

Relevance:

80.00%

Abstract:

Translation termination at a prematurely incorporated nonsense codon in an RNA transcript can lead to the production of aberrant proteins with gain-of-function or dominant-negative properties that could have deleterious effects on the cell. T-cell receptor (TCR) genes acquire premature termination codons two-thirds of the time as a result of the error-prone programmed rearrangement events that normally occur during T-cell development. My studies have focused on the fate of TCR precursor mRNAs in response to in-frame nonsense mutations. Previous published studies from our laboratory have shown that TCR precursor mRNAs are subject to nonsense-mediated upregulation of pre-mRNA (NMUP). In this dissertation, I performed substitution and deletion analysis to characterize the specific regions of TCR required to elicit NMUP. I performed frame- and factor-dependence studies to determine its relationship to other nonsense-codon-induced responses using several approaches, including (i) translation-dependence studies, (ii) deletion and mutational analysis, and (iii) siRNA-mediated knockdown of the proteins involved. I also addressed the underlying molecular mechanism of this pre-mRNA upregulation by (i) RNA half-life studies using a c-fos inducible promoter and (ii) a variety of assays to determine pre-mRNA splicing efficiency. Using these approaches, I identified a region of TCR that is both necessary and sufficient to elicit NMUP. I also found that neither the cytoplasmic translation machinery nor the protein UPF1 is involved in eliciting this nuclear event. I have shown that NMUP can be induced not only by nonsense and frameshift mutations but also by missense mutations that disrupt a cis splicing element in the exon containing the mutation. However, the effect of nonsense mutations on pre-mRNA is unique and distinguishable from that of missense mutations in that nonsense mutations can upregulate pre-mRNA in a frame-dependent manner. Lastly, I provide evidence that NMUP occurs by a mechanism in which nonsense mutations inhibit the splicing of introns. In summary, I have found that TCR precursor mRNAs are subject to multiple forces involving both RNA splicing and translation that can either increase or decrease the levels of these precursor mRNAs.

Relevance:

80.00%

Abstract:

ATP-dependent chromatin remodeling has been shown to be critical for transcription and DNA repair. However, its involvement in DNA replication remains poorly defined. Interestingly, we found that the INO80 chromatin-remodeling complex is directly involved in the DNA damage tolerance pathways activated during DNA replication. DNA damage tolerance is important for genomic stability and is controlled by the formation of either mono-ubiquitinated or multi-ubiquitinated PCNA, which respectively induce error-prone or error-free replication bypass of lesions. In addition, homologous recombination (HR) mediated by the Rad51 pathway is also involved in DNA damage tolerance. We found that INO80 is specifically recruited to replication origins during S phase in a genome-wide fashion. In addition, DNA combing analysis shows that INO80 is required for the resumption of replication at forks stalled by methyl methanesulfonate (MMS). Mechanistically, we find that INO80 is required for PCNA ubiquitination as well as for Rad51-mediated processing of replication forks after MMS treatment. Furthermore, chromatin immunoprecipitation at specific ARSs indicates that INO80 is necessary for Rad18 and Rad51 recruitment to replication forks after MMS treatment. Moreover, 2D gel analysis shows that INO80 is necessary to process Rad51-mediated intermediates at impeded replication forks. In conclusion, our findings establish a novel role for a chromatin-remodeling complex in DNA damage tolerance pathways and suggest that chromatin remodeling is fundamentally important to ensure faithful replication of DNA and genome stability in eukaryotes.

Relevance:

80.00%

Abstract:

Nonsense-mediated mRNA decay (NMD) is a quality control mechanism that degrades aberrant mRNAs harboring premature termination codons (PTCs). Two out of three T-cell receptor β (TCRβ) transcripts carry PTCs as a result of the error-prone programmed rearrangements that occur at this locus during lymphocyte maturation. PTCs decrease TCRβ mRNA levels to a much greater extent than mRNAs transcribed from non-rearranging genes. This robust decrease in TCRβ mRNA levels is not a unique characteristic of the T-cell environment or the TCRβ promoter. The simplest explanation is that PTC-bearing TCRβ mRNAs elicit a stronger NMD response; an alternative explanation is that NMD collaborates with another mechanism to dramatically decrease PTC-bearing TCRβ mRNA levels. In my dissertation, I investigated the molecular mechanism behind the strong decrease in TCRβ mRNA levels triggered by PTCs. To determine the location of this response, I performed mRNA half-life analysis and found that PTCs elicited more rapid TCRβ mRNA decay in the nuclear fraction, not the cytoplasmic fraction. Although decay was restricted to the nuclear fraction, PTC-bearing TCRβ transcript levels were extremely low in the cytoplasm, a phenomenon that I named the nonsense-codon-induced partitioning shift (NIPS). I established that NIPS shares several qualities with NMD, including its dependence on translation and NMD factors. Several lines of evidence suggested that NIPS results from PTCs eliciting retention of TCRβ transcripts in the nuclear fraction. This retention, as well as rapid TCRβ mRNA decay, most likely occurs in either the nucleoplasm or at the outer nuclear membrane, based on analysis of nuclear and cytoplasmic markers in the highly purified nuclei used for my studies. To further address the location of decay, I asked whether nuclear or cytoplasmic RNA decay factors mediated the destruction of PTC-bearing mRNAs. My results suggested that a nuclear component of the 3'-to-5' exosome, as well as an endonucleolytic activity, is involved in the destruction of PTC-containing TCRβ mRNAs. Individual endogenous NMD substrates had differential requirements for nuclear and cytoplasmic exonucleases. In summary, my results provide evidence that PTCs trigger multiple mechanisms involving multiple decay factors to remove and regulate mRNAs in mammalian cells.

Relevance:

80.00%

Abstract:

Three methodologies for assessing As bioaccessibility were evaluated using playground soil collected from 16 playgrounds in Madrid, Spain: two (the Simplified Bioaccessibility Extraction Test, SBET, and hydrochloric acid extraction, HCl) assess gastric-only bioaccessibility, and the third (the Physiologically Based Extraction Test, PBET) evaluates mouth-gastric-intestinal bioaccessibility. Aqua regia-extractable (pseudo-total) As contents, which are routinely employed in risk assessments, were used as the reference to establish the following percentages of bioaccessibility: SBET, 63.1; HCl, 51.8; PBET, 41.6, with the highest values associated with the gastric-only extractions. For Madrid playground soils, characterised by a very uniform, weakly alkaline pH and low Fe oxide and organic matter contents, the statistical analysis of the results indicates that, in contrast with other studies, the highest percentage of As in the samples was bound to carbonates and/or present as calcium arsenate. As opposed to the As bound to Fe oxides, this As is readily released in the gastric environment as the carbonate matrix is decomposed and calcium arsenate is dissolved, but some of it is subsequently sequestered in unavailable forms as the pH is raised to 5.5 to mimic intestinal conditions. The HCl extraction can be used as a simple and reliable (i.e., low residual standard error) proxy for the more expensive, time-consuming, and error-prone PBET methodology. The HCl method would essentially halve the estimate of carcinogenic risk for children playing in Madrid playground soils, providing a more representative value of the associated risk than the pseudo-total concentrations used at present.
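
As a back-of-the-envelope check of the final claim, assuming (as is standard in such risk assessments) that ingestion-related carcinogenic risk scales linearly with the bioaccessible As fraction:

```python
# Risk scales with the As fraction assumed absorbable; using HCl-extractable
# As (51.8% bioaccessible) instead of pseudo-total As (100%) roughly halves it.
pseudo_total_risk = 1.0     # normalized risk based on aqua regia (pseudo-total) As
hcl_bioaccessible = 0.518   # gastric bioaccessibility from the HCl extraction
print(pseudo_total_risk * hcl_bioaccessible)   # -> 0.518, i.e. about half
```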

Relevance:

80.00%

Abstract:

The implementation of abstract machines involves complex decisions regarding, e.g., data representation, opcodes, or instruction specialization levels, all of which affect the final performance of the emulator and the size of the bytecode programs in ways that are often difficult to foresee. Moreover, studying alternatives by implementing abstract machine variants is a time-consuming and error-prone task because of the level of complexity and optimization of competitive implementations, which makes them generally difficult to understand, maintain, and modify. This also makes it hard to generate specific implementations for particular purposes. To ameliorate these problems, we propose a systematic approach to the automatic generation of implementations of abstract machines. Different parts of their definition (e.g., the instruction set or the internal data and bytecode representation) are kept separate and automatically assembled in the generation process. Alternative versions of the abstract machine are therefore easier to produce, and variants of their implementation can be created mechanically, with specific characteristics for a particular application if necessary. We illustrate the practicality of the approach by reporting on an implementation of a generator of production-quality WAMs which are specialized for executing a particular fixed (set of) program(s). The experimental results show that the approach is effective in reducing emulator size.
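
The paper generates production-quality WAM emulators; the toy Python sketch below merely illustrates the underlying idea of keeping instruction definitions separate from the assembly step and emitting an emulator specialized to the opcodes a fixed program actually uses, so unused instructions contribute no size or dispatch cost. The stack machine and its instruction set are invented for illustration.

```python
# Instruction definitions kept separate from the machine that assembles them.
INSTRUCTIONS = {
    "push": "stack.append(arg)",
    "add":  "b = stack.pop(); a = stack.pop(); stack.append(a + b)",
}

def generate_emulator(used_opcodes):
    """Emit source code for an emulator specialized to `used_opcodes`."""
    cases = "\n".join(
        f"        {'if' if i == 0 else 'elif'} op == {op!r}:\n"
        f"            {INSTRUCTIONS[op]}"
        for i, op in enumerate(used_opcodes)
    )
    return ("def run(bytecode):\n"
            "    stack = []\n"
            "    for op, arg in bytecode:\n"
            f"{cases}\n"
            "    return stack\n")

exec(generate_emulator(["push", "add"]))   # materialize the specialized emulator
print(run([("push", 2), ("push", 3), ("add", None)]))   # -> [5]
```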

Relevance:

80.00%

Abstract:

Biomedical ontologies are key elements for building up the Life Sciences Semantic Web. Reusing and building biomedical ontologies requires flexible and versatile tools to manipulate them efficiently, in particular for enriching their axiomatic content. The Ontology Pre-Processor Language (OPPL) is an OWL-based language for automating the changes to be performed in an ontology. OPPL augments the ontologists' toolbox by providing a more efficient, and less error-prone, mechanism for enriching a biomedical ontology than manual treatment. Results: We present OPPL-Galaxy, a wrapper for using OPPL within Galaxy. The functionality delivered by OPPL (i.e., automated ontology manipulation) can be combined with the tools and workflows devised within the Galaxy framework, resulting in an enhancement of OPPL. Use cases are provided to demonstrate OPPL-Galaxy's capability for enriching, modifying, and querying biomedical ontologies. Conclusions: Coupling OPPL-Galaxy with other bioinformatics tools of the Galaxy framework results in a system that is more than the sum of its parts. OPPL-Galaxy opens a new dimension of analyses and exploitation of biomedical ontologies, including automated reasoning, paving the way towards advanced biological data analyses.
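
Galaxy tools can also be driven programmatically. As a hypothetical illustration (not taken from the paper), the sketch below uses the BioBlend client library to run an OPPL-Galaxy tool on an uploaded ontology; the server URL, API key, tool id, and input parameter names are all placeholders.

```python
from bioblend.galaxy import GalaxyInstance

# Connect to a Galaxy server that has OPPL-Galaxy installed.
# URL and API key are placeholders.
gi = GalaxyInstance(url="https://galaxy.example.org", key="YOUR_API_KEY")
history = gi.histories.create_history(name="ontology-enrichment")

# Upload the ontology, then run the OPPL-Galaxy tool on it.
upload = gi.tools.upload_file("pizza.owl", history["id"])
dataset_id = upload["outputs"][0]["id"]
result = gi.tools.run_tool(
    history_id=history["id"],
    tool_id="oppl",                    # hypothetical tool id
    tool_inputs={
        "input": {"src": "hda", "id": dataset_id},
        "oppl_script": "script.oppl",  # OPPL script defining the axiom changes
    },
)
```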

Relevance:

80.00%

Abstract:

Most forestry applications of airborne laser scanning (ALS) require the integration and simultaneous use of various data sources in pursuit of a variety of objectives. Projects based on remotely sensed data generally consist of data-fusion stages of increasing scale: from the most detailed information obtained for a limited area (the field plot) to a more uncertain forest response sensed over a much larger extent (the airborne or satellite swath). All data sources ultimately rely on global navigation satellite systems (GNSS), which are especially error-prone when operating under forest canopies. Additional processing stages, such as orthorectification, may also be affected by vegetation, deteriorating the accuracy of the reference coordinates of optical imagery. These errors introduce noise into the models, as predictors are displaced from the true position of their corresponding response. The degree to which forest estimations are affected depends on the spatial dispersion of the variables involved and on the scale used in each case.
This thesis reviews the sources of positioning error that may affect the various inputs involved in an ALS-assisted forest inventory project, shows how the properties of the forest canopy itself affect their magnitude, and recommends methods for reducing them. It also discusses the most appropriate ways of measuring accuracy and precision in each case, and how positioning errors actually affect the quality of estimations, with a view to cost-efficient planning of data acquisition. The final optimization of GNSS positioning and optical-sensor radiometry revealed the importance of the latter in predicting the relative density of a monospecific Pinus sylvestris L. forest.
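
A simple way to appreciate how GNSS positioning error propagates into plot-level ALS predictors is a Monte Carlo simulation: displace the plot centre by a plausible under-canopy GNSS error and measure the noise this adds to the extracted metric. The sketch below uses synthetic data and an assumed 5 m RMS error; it illustrates the problem described above, not a method from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def plot_metric(points, center, radius=10.0):
    """Mean height of ALS returns inside a circular field plot."""
    inside = np.linalg.norm(points[:, :2] - center, axis=1) <= radius
    return points[inside, 2].mean()

# Synthetic ALS cloud: x, y in a 100 m square, z a smooth height surface
xy = rng.uniform(0, 100, size=(50_000, 2))
z = 15 + 5 * np.sin(xy[:, 0] / 20) + rng.normal(0, 1, 50_000)
points = np.column_stack([xy, z])

# Monte Carlo: displace the plot centre by a GNSS error (assumed 5 m RMS
# under canopy) and observe the noise added to the plot-level predictor
true_center = np.array([50.0, 50.0])
errors = [plot_metric(points, true_center + rng.normal(0, 5, 2))
          - plot_metric(points, true_center) for _ in range(200)]
print(f"metric noise std from positioning error: {np.std(errors):.2f} m")
```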

Relevance:

80.00%

Abstract:

Predicting failures in a distributed system from previous events through logistic regression is a standard approach in the literature. This technique is not reliable, though, in two situations: in the prediction of rare events, which do not appear in sufficient proportion for the algorithm to capture them, and in environments with too many variables, as logistic regression tends to overfit in these situations, while manually selecting a subset of variables for the model is error-prone. In this paper, we solve an industrial research case that presented this situation with a combination of elastic net logistic regression (a method that allows us to select useful variables automatically), a process of cross-validation on top of it, and the application of a rare-events prediction technique to reduce computation time. This process provides two layers of cross-validation that automatically obtain the optimal model complexity and the optimal model parameter values, while ensuring that even rare events will be correctly predicted with a low number of training instances. We tested this method against real industrial data, obtaining a total of 60 out of 80 possible models with a 90% average model accuracy.
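
A minimal sketch of the modelling core described above, using scikit-learn's elastic-net-penalized logistic regression with built-in cross-validation over both the regularization strength and the l1/l2 mix. Here class_weight="balanced" stands in for the paper's (unspecified) rare-events technique, and the synthetic data and all hyperparameter grids are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the industrial event data: many variables, ~3% failures
X, y = make_classification(n_samples=2000, n_features=50, n_informative=5,
                           weights=[0.97], random_state=0)

clf = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(
        penalty="elasticnet",
        solver="saga",                  # the sklearn solver supporting elastic net
        l1_ratios=[0.1, 0.5, 0.9],      # grid over the l1/l2 mix
        Cs=10,                          # grid over regularization strength
        cv=5,                           # inner cross-validation layer
        class_weight="balanced",        # crude guard against ignoring rare events
        scoring="average_precision",    # more informative than accuracy here
        max_iter=5000,
    ),
)
clf.fit(X, y)
print(clf.score(X, y))
```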