930 results for Protein Array Analysis -- methods


Relevance: 100.00%

Abstract:

Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a noninvasive technique for quantitative assessment of the integrity of the blood-brain barrier and blood-spinal cord barrier (BSCB) in the presence of central nervous system pathologies. However, the results of DCE-MRI show substantial variability. This high variability can be caused by a number of factors, including inaccurate T1 estimation, insufficient temporal resolution and poor contrast-to-noise ratio. The goal of my thesis work was to develop improved methods that reduce the variability of DCE-MRI results. To obtain a fast and accurate T1 map, the Look-Locker acquisition technique was implemented with a novel, truly centric k-space segmentation scheme. In addition, an original multi-step curve-fitting procedure was developed to increase the accuracy of T1 estimation. A view-sharing acquisition method was implemented to increase temporal resolution, and a novel normalization method was introduced to reduce image artifacts. Finally, a new clustering algorithm was developed to reduce apparent noise in the DCE-MRI data. The performance of these proposed methods was verified by simulations and phantom studies. As part of this work, the proposed techniques were applied to an in vivo DCE-MRI study of experimental spinal cord injury (SCI). These methods have shown robust results and allow quantitative assessment of regions with very low vascular permeability. In conclusion, the improved DCE-MRI acquisition and analysis methods developed in this thesis can increase the accuracy of DCE-MRI results.
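
To make the T1-mapping step concrete, the sketch below fits the standard three-parameter Look-Locker recovery model S(t) = A - B*exp(-t/T1*) and applies the usual correction from the apparent T1* to T1. It is only an illustration with synthetic data, not the specific multi-step fitting procedure developed in the thesis.

    import numpy as np
    from scipy.optimize import curve_fit

    def look_locker(t, a, b, t1_star):
        # Three-parameter Look-Locker recovery: S(t) = A - B * exp(-t / T1*)
        return a - b * np.exp(-t / t1_star)

    def fit_t1(inversion_times, signal):
        p0 = (signal.max(), signal.max() - signal.min(), inversion_times.mean())
        (a, b, t1_star), _ = curve_fit(look_locker, inversion_times, signal, p0=p0, maxfev=5000)
        return t1_star * (b / a - 1.0)   # standard correction from apparent T1* to true T1

    # Synthetic recovery curve (inversion times in ms), for demonstration only
    ti = np.linspace(50, 3000, 25)
    signal = look_locker(ti, 1.0, 1.8, 800.0) + np.random.default_rng(0).normal(0, 0.01, ti.size)
    print(f"Estimated T1: {fit_t1(ti, signal):.0f} ms")   # true value here is 800 * (1.8 - 1) = 640 ms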

Relevance: 100.00%

Abstract:

Accurate quantitative estimation of exposure using retrospective data has been one of the most challenging tasks in the exposure assessment field. To improve these estimates, models have been developed using published exposure databases and their corresponding exposure determinants. These models are designed to be applied to exposure determinants reported by study subjects or to exposure levels assigned by an industrial hygienist, so that quantitative exposure estimates can be obtained.

In an effort to improve the prediction accuracy and generalizability of these models, and considering that the limitations encountered in previous studies might stem from limitations of traditional statistical methods and concepts, this study proposed and explored the use of data analysis methods derived from computer science, predominantly machine learning approaches.

The goal of this study was to develop a set of models using decision tree/ensemble and neural network methods to predict occupational exposure outcomes from literature-derived databases, and to compare, using cross-validation and data-splitting techniques, their prediction capacity with that of traditional regression models. Two cases were addressed: the categorical case, where the exposure level was expressed as an exposure rating following the American Industrial Hygiene Association guidelines, and the continuous case, where the exposure was expressed as a concentration value. Previously developed literature-based exposure databases for 1,1,1-trichloroethane, methylene dichloride and trichloroethylene were used.

When compared with regression estimates, the results showed better accuracy for decision tree/ensemble techniques in the categorical case, while neural networks were better for estimating continuous exposure values. Overrepresentation of classes and overfitting were the main causes of poor neural network performance and accuracy. Estimates based on literature-based databases obtained with machine learning techniques might provide an advantage when applied within methodologies that combine expert inputs with current exposure measurements, such as the Bayesian Decision Analysis tool. The use of machine learning techniques to estimate exposures more accurately from literature-based exposure databases might represent a starting point toward independence from expert judgment.
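
As a minimal illustration of the kind of comparison described above (and only that: the data here are synthetic stand-ins for the literature-derived exposure databases, and the models are off-the-shelf scikit-learn estimators rather than those developed in the study), an ensemble method can be scored against ordinary regression under the same cross-validation scheme:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import KFold, cross_val_score

    # Synthetic stand-in: rows are exposure determinants, target is a concentration
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))
    y = 2.0 * X[:, 0] - 1.5 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=200)

    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    models = [("linear regression", LinearRegression()),
              ("random forest", RandomForestRegressor(n_estimators=200, random_state=0))]
    for name, model in models:
        r2 = cross_val_score(model, X, y, cv=cv, scoring="r2")
        print(f"{name}: mean cross-validated R^2 = {r2.mean():.2f}")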

Relevance: 100.00%

Abstract:

The spatial and temporal dynamics of seagrasses have been well studied at the leaf to patch scales; however, the link to landscape- and population-scale dynamics over large spatial extents remains unresolved in seagrass ecology. Traditional remote sensing approaches have lacked the temporal resolution and consistency to address this issue appropriately. This study uses two high temporal resolution time-series of thematic seagrass cover maps to examine the spatial and temporal dynamics of seagrass at both inter- and intra-annual time scales, one of the first studies globally to do so at this scale. Previous work by the authors developed an object-based approach to map seagrass cover level distribution from a long-term archive of Landsat TM and ETM+ images on the Eastern Banks (~200 km²), Moreton Bay, Australia. In this work a range of trend and time-series analysis methods are demonstrated for a time-series of 23 annual maps from 1988 to 2010 and a time-series of 16 monthly maps during 2008-2010. Significant new insight was gained regarding the inter- and intra-annual dynamics of seagrass persistence over time, seagrass cover level variability, seagrass cover level trajectory, and change in seagrass area and cover levels over time. Overall we found no significant decline in total seagrass area on the Eastern Banks, but there was a significant decline in seagrass cover level condition. A case study of two smaller communities within the Eastern Banks that experienced a decline in both overall seagrass area and condition is examined in detail, highlighting possible differences in environmental and process drivers. We demonstrate how trend and time-series analysis enabled seagrass distribution to be assessed appropriately in the context of its spatial and temporal history, providing the ability not only to quantify change but also to describe the type of change. We also demonstrate the potential use of time-series analysis products to investigate seagrass growth and decline, as well as the processes that drive them. This study demonstrates clear benefits over traditional seagrass mapping and monitoring approaches, and provides a proof of concept for the use of trend and time-series analysis of remotely sensed seagrass products to support current endeavours in seagrass ecology.
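
As a simple sketch of the kind of trend testing such a time-series of maps supports (illustrative only, with synthetic numbers in place of the mapped Eastern Banks products), annual summaries of total area and mean cover level can be tested for monotonic trends:

    import numpy as np
    from scipy import stats

    # Synthetic annual summaries standing in for the mapped products:
    # roughly stable total area, declining mean cover level
    years = np.arange(1988, 2011)
    rng = np.random.default_rng(1)
    area = 120 + rng.normal(0, 4, years.size)                              # km^2
    cover = 60 - 0.8 * (years - years[0]) + rng.normal(0, 3, years.size)   # percent cover

    for label, series in [("total area", area), ("mean cover level", cover)]:
        ols = stats.linregress(years, series)              # least-squares slope and its p-value
        tau, p_tau = stats.kendalltau(years, series)        # rank-based (Mann-Kendall-style) test
        print(f"{label}: slope {ols.slope:+.2f}/yr (p={ols.pvalue:.3f}), "
              f"Kendall tau {tau:+.2f} (p={p_tau:.3f})")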

Relevance: 100.00%

Abstract:

This article presents and illustrates a practical approach to the dataflow analysis of constraint logic programming languages using abstract interpretation. It is first argued that, from the framework point of view, it suffices to propose relatively simple extensions of traditional analysis methods which have already been proved useful and practical and for which efficient fixpoint algorithms exist. This is shown by proposing a simple extension of Bruynooghe's traditional framework which allows it to analyze constraint logic programs. Then, using this generalized framework, two abstract domains and their required abstract functions are presented: the first abstract domain approximates definiteness information and the second one freeness. Finally, an approach for combining those domains is proposed. The two domains and their combination have been implemented and used in the analysis of CLP(R) and Prolog III applications. Results from this implementation, showing its performance and accuracy, are also presented.
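
To give a flavour of what a definiteness domain computes, here is a toy Python sketch (purely illustrative; it is not the abstract domain, abstract functions, or framework defined in the article): over a store of equality constraints, a variable becomes definite once every other variable in some constraint containing it is definite, and this propagation is iterated to a fixpoint.

    def definiteness_fixpoint(constraints, definite):
        # constraints: each item is the set of variables appearing in one equality constraint
        # definite: variables already known to be uniquely determined
        definite = set(definite)
        changed = True
        while changed:
            changed = False
            for vars_in_constraint in constraints:
                unknown = vars_in_constraint - definite
                if len(unknown) == 1:          # exactly one unknown left: it is now determined
                    definite |= unknown
                    changed = True
        return definite

    # Hypothetical store: X = Y + Z, Z = 2*W, W = 3
    store = [{"X", "Y", "Z"}, {"Z", "W"}, {"W"}]
    print(sorted(definiteness_fixpoint(store, set())))   # ['W', 'Z']; X and Y remain unknown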

Relevance: 100.00%

Abstract:

This paper presents and illustrates with an example a practical approach to the dataflow analysis of programs written in constraint logic programming (CLP) languages using abstract interpretation. It is first argued that, from the framework point of view, it suffices to propose relatively simple extensions of traditional analysis methods which have already been proved useful and practical and for which efficient fixpoint algorithms have been developed. This is shown by proposing a simple but quite general extension of Bruynooghe's traditional framework to the analysis of CLP programs. In this extension constraints are viewed not as "suspended goals" but rather as new information in the store, following the traditional view of CLP. Using this approach, and as an example of its use, a complete, constraint system independent, abstract analysis is presented for approximating definiteness information. The analysis is in fact of quite general applicability. It has been implemented and used in the analysis of CLP(R) and Prolog-III applications. Results from the implementation of this analysis are also presented.
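
The fixpoint computation such frameworks rely on can also be sketched generically (again an illustration only, not the efficient algorithms referred to above): monotone transfer functions are applied to the abstract store until no new information is added.

    def fixpoint(transfers, store):
        # transfers: functions from an abstract store (here, a set of facts) to new facts
        store = set(store)
        changed = True
        while changed:
            changed = False
            for transfer in transfers:
                new_facts = transfer(store) - store
                if new_facts:
                    store |= new_facts
                    changed = True
        return store

    # Hypothetical transfers mimicking constraints adding information to the store
    rules = [
        lambda s: {"Z is definite"} if {"X is definite", "Y is definite"} <= s else set(),  # Z = X + Y
        lambda s: {"Y is definite"} if "X is definite" in s else set(),                     # Y = 2*X
    ]
    print(sorted(fixpoint(rules, {"X is definite"})))   # all three variables end up definite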

Relevance: 100.00%

Abstract:

Software development is an inherently complex activity that requires specific problem-comprehension and problem-solving abilities. It is so difficult that it can be stated flatly that there is no perfect method for each of the development stages and no perfect life-cycle model: each new problem differs from previous ones in some respect, and techniques that worked in earlier projects can fail in new ones. Given that situation, the current trend is an integrative approach that uses, in each case, the methods, tools and techniques best suited to the problem the engineer faces. This approach, however, raises new problems. The first is the selection of development approaches: if no single approach is best, how does one choose the most suitable from the available options? The second problem concerns the relationship between the analysis and design phases, which carries two major risks. On one hand, the analysis may be too superficial and therefore too far removed from the design, often making the transition between them impossible. On the other hand, the analysis may be expressed in design terms, so that instead of focusing on the problem it becomes a first version of the solution, that is, a preliminary design. From this follows the analysis dilemma: for each problem the most suitable techniques must be chosen, which requires knowing the characteristics of the problem; to learn them, the problem must be analysed, so an analysis technique has to be chosen before the problem is known. If that technique uses design terms, the solution paradigm has been preconditioned and may not be the most suitable one once the problem is well understood. Finally, there are pragmatic barriers that hold back the adoption of formally based methods and hinder their application in everyday practice.
Taking all these problems into account, problem-analysis methods are required that meet several goals. The first is a formal basis, to avoid ambiguity and allow the correctness of the generated models to be verified. The second is design independence: the methods should use terms with no direct counterpart in the design, so that attention stays on the characteristics of the problem. They should also support problems of any kind: algorithmic, decision-support or knowledge-based, among others. Two further goals concern pragmatic aspects. The methods should provide a textual notation that is formal but not mathematical, so that the models can be validated and understood by people without deep mathematical knowledge while remaining rigorous enough for verification; and they should provide a complementary graphical notation so that clients and users can comfortably understand and validate the models. This doctoral thesis presents SETCM, an analysis method that meets these goals.
All the elements that make up the analysis models have been defined using terminology independent of design paradigms, and these definitions have been formalised using the basic notions of set theory: elements, sets and relations between sets. A formal language has been defined to represent the elements of the analysis models, avoiding mathematical notation as far as possible, complemented by a graphical notation that visually represents the most relevant parts of the models. The proposed method was subjected to an intensive experimentation phase, during which it was applied to 13 case studies, all of them real projects that concluded in products transferred to public or private organisations. The experimentation evaluated the suitability of SETCM for analysing problems of different sizes and for systems whose final design used different, and even mixed, paradigms. Its use by analysts with different levels of experience (novice, intermediate and expert) was also evaluated, examining the learning curve in every case to determine whether the method is easy to learn regardless of prior knowledge of other analysis techniques. The extensibility of models produced with SETCM was studied as well, to check whether it supports projects carried out in several phases, in which the analysis of one phase extends the analysis of the previous one. In short, the aim was to evaluate how well SETCM can be adopted within an organisation as the preferred analysis technique for software development. The results of this experimentation were very positive, with a high degree of fulfilment of all the goals set when the method was defined.
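
A hypothetical illustration of the style of formalisation described (this is not SETCM's actual notation or metamodel): model elements as sets and correspondences as relations between sets, which makes simple well-formedness checks mechanical.

    # Illustrative only: a tiny analysis model as sets plus a correspondence between them
    concepts = {"Order", "Customer", "Invoice"}
    tasks = {"register order", "bill customer"}

    uses = {                                   # correspondence: task -> concept it relies on
        ("register order", "Order"),
        ("register order", "Customer"),
        ("bill customer", "Invoice"),
        ("bill customer", "Customer"),
    }

    # Checks that a set-theoretic formalisation makes easy to verify mechanically
    every_task_uses_something = all(any(t == task for t, _ in uses) for task in tasks)
    only_declared_concepts = {c for _, c in uses} <= concepts
    print(every_task_uses_something and only_declared_concepts)   # True for this toy model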

Relevance: 100.00%

Abstract:

We have identified and characterized CLARP, a caspase-like apoptosis-regulatory protein. Sequence analysis revealed that human CLARP contains two amino-terminal death effector domains fused to a carboxyl-terminal caspase-like domain. The structure and amino acid sequence of CLARP resemble those of caspase-8, caspase-10, and DCP2, a Drosophila melanogaster protein identified in this study. Unlike caspase-8, caspase-10, and DCP2, however, the caspase-like domain of CLARP lacks two important residues predicted to be involved in catalysis. Analysis with fluorogenic substrates for caspase activity confirmed that CLARP is catalytically inactive. CLARP was found to interact with caspase-8 but not with FADD/MORT-1, an upstream death effector domain-containing protein of the Fas and tumor necrosis factor receptor 1 signaling pathway. Expression of CLARP induced apoptosis, which was blocked by the viral caspase inhibitor p35, a dominant-negative caspase-8 mutant, and the synthetic caspase inhibitor benzyloxycarbonyl-Val-Ala-Asp-(OMe)-fluoromethylketone (zVAD-fmk). Moreover, CLARP augmented the killing ability of caspase-8 and FADD/MORT-1 in mammalian cells. The human clarp gene maps to chromosome 2q33. Thus, CLARP is a regulator of the upstream protease caspase-8 and may play a role in apoptosis during tissue development and homeostasis.

Relevance: 100.00%

Abstract:

The vibrational energy relaxation of carbon monoxide in the heme pocket of sperm whale myoglobin was studied by using molecular dynamics simulation and normal mode analysis methods. Molecular dynamics trajectories of solvated myoglobin were run at 300 K for both the δ- and ɛ-tautomers of the distal His-64. Vibrational population relaxation times of 335 ± 115 ps for the δ-tautomer and 640 ± 185 ps for the ɛ-tautomer were estimated by using the Landau–Teller model. Normal mode analysis was used to identify those protein residues that act as the primary “doorway” modes in the vibrational relaxation of the oscillator. Although the CO relaxation rates in both the ɛ- and δ-tautomers are similar in magnitude, the simulations predict that the vibrational relaxation of the CO is faster in the δ-tautomer with the distal His playing an important role in the energy relaxation mechanism. Time-resolved mid-IR absorbance measurements were performed on photolyzed carbonmonoxy hemoglobin (Hb13CO). From these measurements, a T1 time of 600 ± 150 ps was determined. The simulation and experimental estimates are compared and discussed.
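
For orientation, a common classical form of the Landau-Teller estimate computes 1/T1 as the friction spectrum at the oscillator frequency divided by the reduced mass, with the friction spectrum obtained as the cosine transform of the fluctuating-force autocorrelation function divided by kB*T. The sketch below uses an entirely synthetic autocorrelation function, so the printed value is illustrative only and is not tied to the times reported above; it also does not reproduce the specific simulation protocol of this study.

    import numpy as np

    kB, T = 1.380649e-23, 300.0        # J/K, K
    mu = 1.139e-26                     # kg, approximate reduced mass of CO
    w0 = 2 * np.pi * 6.43e13           # rad/s, roughly the 2143 cm^-1 CO stretch

    dt = 1.0e-15                       # 1 fs time step
    t = np.arange(0.0, 2.0e-12, dt)    # 2 ps window
    # Synthetic fluctuating-force autocorrelation (units N^2): decaying oscillation
    acf = 1.5e-23 * np.exp(-t / 0.2e-12) * np.cos(1.05 * w0 * t)

    zeta_w0 = np.sum(acf * np.cos(w0 * t)) * dt / (kB * T)   # friction spectrum at w0
    T1 = mu / zeta_w0                                        # classical Landau-Teller estimate
    print(f"T1 ~ {T1 * 1e12:.0f} ps (synthetic input)")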

Relevance: 100.00%

Abstract:

PDB-REPRDB is a database of representative protein chains from the Protein Data Bank (PDB). The previous version of PDB-REPRDB provided 48 representative sets, whose similarity criteria were predetermined, on the WWW. The current version is designed so that the user may obtain a quick selection of representative chains from PDB. The selection of representative chains can be dynamically configured according to the user’s requirement. The WWW interface provides a large degree of freedom in setting parameters, such as cut-off scores of sequence and structural similarity. One can obtain a representative list and classification data of protein chains from the system. The current database includes 20 457 protein chains from PDB entries (August 6, 2000). The system for PDB-REPRDB is available at the Parallel Protein Information Analysis system (PAPIA) WWW server (http://www.rwcp.or.jp/papia/).
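
As a generic sketch of how representatives can be chosen under a user-supplied similarity cut-off (a simple greedy scheme for illustration, not necessarily the algorithm PDB-REPRDB uses), with made-up chain identifiers and similarity scores:

    def pick_representatives(chains, similarity, cutoff):
        representatives, clusters = [], {}
        for chain in chains:                          # e.g. ordered by preference (resolution, length, ...)
            for rep in representatives:
                if similarity(chain, rep) >= cutoff:  # too similar to an existing representative
                    clusters[rep].append(chain)
                    break
            else:                                     # sufficiently different: becomes a new representative
                representatives.append(chain)
                clusters[chain] = [chain]
        return representatives, clusters

    # Made-up chains and pairwise sequence identities
    scores = {frozenset(k): v for k, v in {
        ("1abcA", "1abdA"): 0.98, ("2xyzB", "3pqrA"): 0.95, ("1abcA", "2xyzB"): 0.22,
        ("1abcA", "3pqrA"): 0.25, ("1abdA", "2xyzB"): 0.21, ("1abdA", "3pqrA"): 0.24,
    }.items()}
    sim = lambda a, b: scores[frozenset((a, b))]
    reps, clusters = pick_representatives(["1abcA", "1abdA", "2xyzB", "3pqrA"], sim, cutoff=0.90)
    print(reps)   # ['1abcA', '2xyzB']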

Relevance: 100.00%

Abstract:

Gene expression profiling provides powerful analyses of transcriptional responses to cellular perturbation. In contrast to DNA array-based methods, reporter gene technology has been underused for this application. Here we describe a genomewide, genome-registered collection of Escherichia coli bioluminescent reporter gene fusions. DNA sequences from plasmid-borne, random fusions of E. coli chromosomal DNA to a Photorhabdus luminescens luxCDABE reporter allowed precise mapping of each fusion. The utility of this collection, which covers about 30% of the transcriptional units, was tested by analyzing individual fusions representative of the heat shock, SOS, OxyR, SoxRS, and cya/crp stress-responsive regulons. Each fusion strain responded as anticipated to environmental conditions known to activate the corresponding regulatory circuit. Thus, the collection mirrors E. coli's transcriptional wiring diagram. This genomewide collection of gene fusions provides an independent test of results from other gene expression analyses. Accordingly, a DNA microarray-based analysis of mitomycin C-treated E. coli indicated elevated expression of both expected and unanticipated genes. Selected luxCDABE fusions corresponding to these up-regulated genes were used to confirm or contradict the DNA microarray results. The power of partnering gene fusion and DNA microarray technology to discover promoters and define operons was demonstrated when data from both suggested that a cluster of 20 genes encoding production of type I extracellular polysaccharide in E. coli forms a single operon.

Relevance: 100.00%

Abstract:

The yeast two-hybrid system and far-Western protein blot analysis were used to demonstrate dimerization of human double-stranded RNA (dsRNA)-dependent protein kinase (PKR) in vivo and in vitro. A catalytically inactive mutant of PKR with a single amino acid substitution (K296R) was found to dimerize in vivo, and a mutant with a deletion of the catalytic domain of PKR retained the ability to dimerize. In contrast, deletion of the two dsRNA-binding motifs in the N-terminal regulatory domain of PKR abolished dimerization. In vitro dimerization of the dsRNA-binding domain required the presence of dsRNA. These results suggest that the binding of dsRNA by PKR is necessary for dimerization. The mammalian dsRNA-binding protein TRBP, originally identified on the basis of its ability to bind the transactivation region (TAR) of human immunodeficiency virus RNA, also dimerized with itself and with PKR in the yeast assay. Taken together, these results suggest that complexes consisting of different combinations of dsRNA-binding proteins may exist in vivo. Such complexes could mediate differential effects on gene expression and control of cell growth.

Relevance: 100.00%

Abstract:

Fluorescent dye-labeled DNA primers have been developed that exploit fluorescence energy transfer (ET) to optimize the absorption and emission properties of the label. These primers carry a fluorescein derivative at the 5' end as a common donor and other fluorescein and rhodamine derivatives attached to a modified thymidine residue within the primer sequence as acceptors. Adjustment of the donor-acceptor spacing through the placement of the modified thymidine in the primer sequence allowed generation of four primers, all having strong absorption at a common excitation wavelength (488 nm) and fluorescence emission maxima of 525, 555, 580, and 605 nm. The ET efficiency of these primers ranges from 65% to 97%, and they exhibit similar electrophoretic mobilities by gel electrophoresis. With argon-ion laser excitation, the fluorescence of the ET primers and of the DNA sequencing fragments generated with ET primers is 2- to 6-fold greater than that of the corresponding primers or fragments labeled with single dyes. The higher fluorescence intensity of the ET primers allows DNA sequencing with one-fourth of the DNA template typically required when using T7 DNA polymerase. With single-stranded M13mp18 DNA as the template, a typical sequencing reaction with ET primers on a commercial sequencer provided DNA sequences with 99.8% accuracy in the first 500 bases. ET primers should be generally useful in the development of other multiplex DNA sequencing and analysis methods.
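
The spacing dependence that the primer design exploits can be illustrated with the standard Förster relation E = 1/(1 + (r/R0)^6); the Förster radius and donor-acceptor spacings below are hypothetical and are not taken from the primers described above.

    # Standard Foerster relation; R0 and spacings are hypothetical, for illustration only
    def fret_efficiency(r, r0):
        return 1.0 / (1.0 + (r / r0) ** 6)

    R0 = 50.0                                  # hypothetical Foerster radius, angstroms
    for bases_apart in (3, 6, 9, 12):
        r = 3.4 * bases_apart                  # ~3.4 A axial rise per base pair in B-form DNA
        print(f"{bases_apart} bases (~{r:.0f} A): E = {fret_efficiency(r, R0):.2f}")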

Relevance: 100.00%

Abstract:

Includes bibliographical references.

Relevance: 100.00%

Abstract:

Background/aims: Chronic infections such as those caused by Chlamydia pneumoniae and by periodontopathic bacteria such as Porphyromonas gingivalis have been associated with atherosclerosis, possibly due to cross-reactivity of the immune response to bacterial GroEL with human heat shock protein (hHSP) 60. Methods: We examined the cross-reactivity of anti-GroEL and anti-P. gingivalis antibodies with hHSP60 in atherosclerosis patients and quantified a panel of six pathogens in atheromas. Results: After absorption of plasma samples with hHSP60, there were variable reductions in the levels of anti-GroEL and anti-P. gingivalis antibodies, suggesting that these antibodies cross-reacted with hHSP60. All of the artery specimens were positive for P. gingivalis. Fusobacterium nucleatum, Tannerella forsythia, C. pneumoniae, Helicobacter pylori, and Haemophilus influenzae were found in 84%, 48%, 28%, 4%, and 4% of arteries, respectively. The prevalence of the three periodontopathic microorganisms, P. gingivalis, F. nucleatum and T. forsythia, was significantly higher than that of the remaining three microorganisms. Conclusions: These results support the hypothesis that, in some patients, cross-reactivity of the immune response to bacterial HSPs, including those of periodontal pathogens, with arterial endothelial cells expressing hHSP60 may be a mechanism underlying the association between atherosclerosis and periodontal infection.