869 results for Q and A Relationships
Abstract:
Directed hypergraphs have been employed in problems related to propositional logic, relational databases, computational linguistics and machine learning. They have also been used as an alternative to directed (bipartite) graphs to facilitate the study of interactions between components of complex systems that cannot easily be modelled using binary relations alone; in this context, this kind of representation is known as a hyper-network. A directed hypergraph is a generalization of a directed graph that is especially suitable for representing many-to-many relationships. Whereas an edge in a directed graph defines a relation between two of its nodes, a hyperedge in a directed hypergraph defines a relation between two sets of its nodes. Strong connectivity is an equivalence relation that partitions the set of nodes of a directed hypergraph, and each partition defines an equivalence class known as a strongly-connected component. Studying the strongly-connected components of a directed hypergraph can help provide a better understanding of the structure of this type of hypergraph when its size is considerable. For directed graphs, there are very efficient algorithms for computing the strongly-connected components of large graphs. Thanks to these algorithms, it has been established that the structure of the WWW has a “bow-tie” shape, in which more than 70% of the nodes are distributed over three large sets, one of which is a strongly-connected component. This type of structure has also been observed in complex networks in other areas, such as biology. Studies of a similar nature had not been possible for directed hypergraphs because no algorithms existed for computing the strongly-connected components of this type of hypergraph. In this doctoral thesis, we have investigated how to compute the strongly-connected components of a directed hypergraph. Specifically, we have developed two algorithms for this problem, proved their correctness and determined their computational complexity. Both algorithms have been evaluated empirically to compare their execution times. For the evaluation, we produced a selection of randomly generated directed hypergraphs inspired by well-known random graph models such as Erdős-Rényi, Newman-Watts-Strogatz and Barabási-Albert. Several optimizations of both algorithms have been implemented and analysed in the thesis. In particular, collapsing the strongly-connected components of the directed graph that can be built by removing certain complex hyperedges from the original directed hypergraph notably improves the execution times of the algorithms for several of the hypergraphs used in the evaluation. Besides the application examples mentioned above, directed hypergraphs have also been employed in the area of knowledge representation; in particular, they have been used to compute modules of ontologies. An ontology can be defined as a set of axioms that formally specifies a set of symbols and their relationships, whereas a module can be understood as a subset of the ontology's axioms that captures all the knowledge the ontology holds about a specific set of symbols and their relationships.
In the thesis we have focused only on modules computed using the syntactic locality technique. Since ontologies can be very large, computing modules can facilitate their reuse and maintenance. However, analysing all possible modules of an ontology is, in general, very costly, because the number of modules grows exponentially with respect to the number of symbols and axioms of the ontology. Fortunately, the axioms of an ontology can be divided into partitions known as atoms. Each atom represents a maximal set of axioms that always appear together in a module. The atomic decomposition of an ontology is defined as a directed graph in which each node corresponds to an atom and each edge defines a dependency between a pair of atoms. In this thesis we introduce the notion of an “axiom dependency hypergraph”, which generalizes the atomic decomposition of an ontology. A module of an ontology corresponds to a connected component in this type of hypergraph, and an atom of an ontology to a strongly-connected component. We have adapted the implementation of our algorithms so that they also work on axiom dependency hypergraphs and can thus compute the atoms of an ontology. To demonstrate the viability of this idea, we have incorporated our algorithms into an application, called HyS, that we developed for extracting modules and computing the atomic decomposition of ontologies. We have studied its execution times using a selection of well-known ontologies from the biomedical domain, most of them available in the NCBO BioPortal. The results of the evaluation show that the execution times of HyS are much better than those of the fastest known applications.
ABSTRACT
Directed hypergraphs are an intuitive modelling formalism that has been used in problems related to propositional logic, relational databases, computational linguistics and machine learning. Directed hypergraphs are also presented as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot naturally be modelled as binary relations. In this context, they are known as hyper-networks. A directed hypergraph is a generalization of a directed graph suitable for representing many-to-many relationships. While an edge in a directed graph defines a relation between two nodes of the graph, a hyperedge in a directed hypergraph defines a relation between two sets of nodes. Strong-connectivity is an equivalence relation that induces a partition of the set of nodes of a directed hypergraph into strongly-connected components. These components can be collapsed into single nodes. As a result, the size of the original hypergraph can be significantly reduced if the strongly-connected components have many nodes. This approach might contribute to a better understanding of how the nodes of a hypergraph are connected, in particular when the hypergraphs are large. In the case of directed graphs, there are efficient algorithms that can be used to compute the strongly-connected components of large graphs. For instance, it has been shown that the macroscopic structure of the World Wide Web can be represented as a “bow-tie” diagram where more than 70% of the nodes are distributed into three large sets and one of these sets is a large strongly-connected component.
This particular structure has also been observed in complex networks in other fields, such as biology. Similar studies could not be conducted on directed hypergraphs because no algorithm existed for computing the strongly-connected components of such hypergraphs. In this thesis, we investigate ways to compute the strongly-connected components of directed hypergraphs. We present two new algorithms and we show their correctness and computational complexity. One of these algorithms is inspired by Tarjan’s algorithm for directed graphs. The second algorithm follows a simple approach to computing the strongly-connected components. This approach is based on the fact that two nodes of a graph that are strongly connected also reach the same nodes; in other words, the connected component of each node is the same. Both algorithms are empirically evaluated to compare their performance. To this end, we have produced a selection of random directed hypergraphs inspired by existing, well-known random graph models like Erdős-Rényi and Newman-Watts-Strogatz. Besides the application examples that we mentioned earlier, directed hypergraphs have also been employed in the field of knowledge representation. In particular, they have been used to compute the modules of an ontology. An ontology is defined as a collection of axioms that provides a formal specification of a set of terms and their relationships, and a module is a subset of an ontology that completely captures the meaning of certain terms as defined in the ontology. In particular, we focus on the modules computed using the notion of syntactic locality. As ontologies can be very large, the computation of modules facilitates the reuse and maintenance of these ontologies. Analysing all modules of an ontology, however, is in general not feasible, as the number of modules grows exponentially in the number of terms and axioms of the ontology. Nevertheless, the modules can be succinctly represented using the Atomic Decomposition of an ontology. Using this representation, an ontology can be partitioned into atoms, which are maximal sets of axioms that co-occur in every module. The Atomic Decomposition is then defined as a directed graph such that each node corresponds to an atom and each edge represents a dependency relation between two atoms. In this thesis, we introduce the notion of an axiom dependency hypergraph, which is a generalization of the atomic decomposition of an ontology. A module in the ontology corresponds to a connected component in the hypergraph, and the atoms of the ontology to the strongly-connected components. We apply our algorithms for directed hypergraphs to axiom dependency hypergraphs and, in this manner, we compute the atoms of an ontology. To demonstrate the viability of this approach, we have implemented the algorithms in the application HyS, which computes the modules of ontologies and calculates their atomic decomposition. In the thesis, we provide an experimental evaluation of HyS with a selection of large and prominent biomedical ontologies, most of which are available in the NCBO Bioportal. HyS outperforms state-of-the-art implementations in the tasks of extracting modules and computing the atomic decomposition of these ontologies.
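As a rough illustration of the simple reachability-based test mentioned in the abstract above, the sketch below builds a toy directed hypergraph, computes the set of nodes reachable from each node by forward chaining (assuming B-connectivity semantics, in which a hyperedge fires only once its whole tail set has been reached), and groups nodes with identical reachable sets into strongly-connected components. The representation, the connectivity reading and the node names are assumptions made for illustration; this is a sketch of the idea, not the thesis algorithms.

```python
from collections import defaultdict

# Toy directed hypergraph: each hyperedge relates a tail set to a head set.
# The representation and the B-connectivity reading are illustrative assumptions.
hyperedges = [
    (frozenset({"a"}), frozenset({"b"})),
    (frozenset({"b", "c"}), frozenset({"a"})),
    (frozenset({"a", "b"}), frozenset({"c", "d"})),
    (frozenset({"d"}), frozenset({"a"})),
]
nodes = set().union(*(tail | head for tail, head in hyperedges))

def reachable(source, edges):
    """Nodes reachable from `source` by forward chaining: a hyperedge fires
    once every node in its tail set has already been reached."""
    reached = {source}
    changed = True
    while changed:
        changed = False
        for tail, head in edges:
            if tail <= reached and not head <= reached:
                reached |= head
                changed = True
    return frozenset(reached)

# Nodes with identical reachable sets are mutually reachable, so grouping by
# reachable set yields the strongly-connected components (a quadratic sketch).
components = defaultdict(set)
for node in nodes:
    components[reachable(node, hyperedges)].add(node)

print([sorted(component) for component in components.values()])
# e.g. [['a', 'd'], ['b'], ['c']] (order of components may vary)
```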
Abstract:
Acquired brain injury (ABI) is a serious social and health problem of growing magnitude and of great diagnostic and therapeutic complexity. Its high incidence, together with the increased survival of patients once the acute phase has been overcome, also makes it a highly prevalent problem. In particular, according to the World Health Organization (WHO), ABI will be among the 10 most common causes of disability by the year 2020. Neurorehabilitation makes it possible to improve both cognitive and functional deficits and to increase the autonomy of people with ABI. By incorporating new technological solutions into the neurorehabilitation process, the aim is to reach a new paradigm in which treatments can be designed that are intensive, personalized, monitored and evidence-based, since it is these four characteristics that ensure that treatments are effective. Unlike most medical disciplines, there are no associations of symptoms and signs of cognitive impairment that can guide therapy. Currently, neurorehabilitation treatments are designed on the basis of the results obtained from a neuropsychological assessment battery that evaluates the level of impairment of each cognitive function (memory, attention, executive functions, etc.). The research line in which this work is framed aims to design and develop a cognitive profile based not only on the results obtained from that battery of tests, but also on theoretical information covering both anatomical structures and functional relationships, and on anatomical information obtained from imaging studies. In this way, the cognitive profile used to design the treatments integrates personalized and evidence-based information. Neuroimaging techniques are a fundamental tool for identifying lesions in order to generate these cognitive profiles. The classical approach to lesion identification consists of manually delineating brain anatomical regions. This approach presents several problems related to inconsistencies of criteria between clinicians, reproducibility and time. Automating this procedure is therefore essential to ensure an objective extraction of information. Automatic delineation of anatomical regions is performed by registration either against atlases or against imaging studies of other subjects. However, the pathological changes associated with ABI are always accompanied by intensity abnormalities and/or changes in the location of structures. As a consequence, traditional intensity-based registration algorithms do not work correctly and require the clinician to intervene to select certain points (which in this thesis we have called singular points). Moreover, these algorithms do not allow large, delocalized deformations, which can also occur in the presence of lesions caused by a stroke or a traumatic brain injury (TBI). This thesis focuses on the design, development and implementation of a methodology for the automatic detection of damaged structures that integrates algorithms whose main objective is to produce reproducible and objective results.
This methodology is divided into four stages: pre-processing, identification of singular points, registration and lesion detection. The work carried out and the results achieved in this thesis are the following. Pre-processing. The goal of this first stage is to homogenize all the input data so that valid conclusions can be drawn from the results obtained; this stage therefore has a large impact on the final results. It consists of three operations: skull-stripping, intensity normalization and spatial normalization. Identification of singular points. The goal of this stage is to automate the identification of anatomical points (singular points). It replaces the manual identification of anatomical points by the clinician and makes it possible to identify a larger number of points, which translates into more information; to remove the factor associated with inter-subject variability, so that the results are reproducible and objective; and to eliminate the time spent on the manual marking of points. This research proposes an algorithm for the identification of singular points (a descriptor) based on a multi-detector solution that contains multi-parametric information: spatial and intensity-related. This algorithm has been compared with similar algorithms found in the state of the art. Registration. The goal of this stage is to bring two imaging studies of different subjects/patients into spatial correspondence. The algorithm proposed in this work is based on descriptors, and its main objective is to compute a vector field that makes it possible to introduce delocalized deformations in the image (in different regions of the image) that are as large as indicated by the associated deformation vector. The proposed algorithm has been compared with other registration algorithms used in neuroimaging applications with control subjects. The results obtained are promising and represent a new context for the automatic identification of structures. Lesion identification. In this final stage, those structures whose characteristics associated with spatial location and area or volume have been modified with respect to a normal state are identified. To this end, a statistical study of the atlas to be used is performed and the statistical parameters of normality associated with location and area are established. Depending on the structures delineated in the atlas, more or fewer anatomical structures can be identified; our methodology is independent of the selected atlas. Overall, this doctoral thesis corroborates the research hypotheses put forward regarding the automatic identification of lesions using structural medical imaging studies, specifically magnetic resonance studies. On these foundations, new lines of research can be opened that contribute to improving lesion detection.
ABSTRACT
Brain injury constitutes a serious social and health problem of increasing magnitude and of great diagnostic and therapeutic complexity. Its high incidence and survival rate, after the initial critical phases, make it a prevalent problem that needs to be addressed. In particular, according to the World Health Organization (WHO), brain injury will be among the 10 most common causes of disability by 2020.
Neurorehabilitation improves both cognitive and functional deficits and increases the autonomy of brain injury patients. The incorporation of new technologies into neurorehabilitation aims to reach a new paradigm focused on designing intensive, personalized, monitored and evidence-based treatments, since these four characteristics are what ensure that treatments are effective. Contrary to most medical disciplines, there are no associations of symptoms and signs of cognitive impairment that can guide the therapist. Currently, neurorehabilitation treatments are planned considering the results obtained from a neuropsychological assessment battery, which evaluates the level of impairment of each cognitive function (memory, attention, executive functions, etc.). The research line in which this PhD thesis is framed aims to design and develop a cognitive profile based not only on the results obtained in the assessment battery, but also on theoretical information that includes both anatomical structures and functional relationships, and on anatomical information obtained from medical imaging studies such as magnetic resonance. Therefore, the cognitive profile used to design these treatments integrates personalized and evidence-based information. Neuroimaging techniques represent an essential tool to identify lesions and to generate this type of cognitive profile. Manual delineation of brain anatomical regions is the classical approach to lesion identification, but it presents several problems related to inconsistencies across different clinicians, time and repeatability. Automated delineation is done by registering brains to one another or to a template. However, when imaging studies contain lesions, intensity abnormalities and location alterations reduce the performance of most registration algorithms based on intensity parameters. Thus, specialists may have to interact manually with the imaging studies to select landmarks (called singular points in this PhD thesis) or to identify regions of interest. These two solutions have the same drawbacks as the manual approaches mentioned before. Moreover, these registration algorithms do not allow large and distributed deformations, which may also appear when a stroke or a traumatic brain injury (TBI) occurs. This PhD thesis focuses on the design, development and implementation of a new methodology to automatically identify lesions in anatomical structures. This methodology integrates algorithms whose main objective is to generate objective and reproducible results. It is divided into four stages: pre-processing, singular point identification, registration and lesion detection. Pre-processing stage. In this first stage, the aim is to standardize all input data so that valid conclusions can be drawn from the results; this stage therefore has a direct impact on the final results. It consists of three steps: skull-stripping, and spatial and intensity normalization. Singular point identification. This stage aims to automate the identification of anatomical points (singular points), replacing their manual identification by the clinician. This automatic identification makes it possible to identify a greater number of points, which results in more information; to remove the factor associated with inter-subject variability, so that the results are reproducible and objective; and to eliminate the time spent on manual marking.
This PhD thesis proposes an algorithm to automatically identify singular points (a descriptor) based on a multi-detector approach. The algorithm contains multi-parametric (spatial and intensity) information and has been compared with other similar algorithms found in the state of the art. Registration. The goal of this stage is to bring two imaging studies of different subjects/patients into spatial correspondence. The algorithm proposed in this PhD thesis is based on descriptors. Its main objective is to compute a vector field that introduces distributed deformations (changes in different regions of the image), as large as the associated deformation vector indicates. The proposed algorithm has been compared with other registration algorithms used in neuroimaging applications with control subjects. The results obtained are promising and represent a new context for the automatic identification of anatomical structures. Lesion identification. This final stage aims to identify those anatomical structures whose characteristics associated with spatial location and area or volume have been modified with respect to a normal state. A statistical study of the atlas to be used is performed to establish the statistical parameters of normality associated with location and area. The anatomical structures that can be identified depend on the structures delineated in the selected atlas; the proposed methodology itself is independent of the chosen atlas. Overall, this PhD thesis corroborates the investigated research hypotheses regarding the automatic identification of lesions based on structural medical imaging studies (magnetic resonance studies). Based on these foundations, new research fields can be opened to improve the automatic identification of lesions in brain injury.
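To make the final lesion-identification stage concrete, the following minimal sketch flags a structure when its centroid location or its volume deviates too far from atlas-derived normality statistics. The structure name, the atlas values, the z-score test and the threshold are illustrative assumptions, not the parameters or tests used in the thesis.

```python
import numpy as np

# Atlas-derived normality statistics for one structure: mean centroid (mm),
# centroid spread, mean volume (mm^3) and volume spread. All values here are
# made up for illustration.
atlas_stats = {
    "left_hippocampus": (np.array([-26.0, -21.0, -14.0]), 2.5, 3500.0, 350.0),
}

def flag_lesioned(structure, centroid, volume, z_threshold=3.0):
    """Flag a structure when its location or size deviates from normality."""
    mean_c, std_c, mean_v, std_v = atlas_stats[structure]
    z_location = np.linalg.norm(centroid - mean_c) / std_c
    z_volume = abs(volume - mean_v) / std_v
    return z_location > z_threshold or z_volume > z_threshold

# A segmented structure from a patient study, displaced and shrunken:
print(flag_lesioned("left_hippocampus",
                    centroid=np.array([-20.0, -15.0, -9.0]),
                    volume=2300.0))  # -> True
```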
Abstract:
Formal, traditional learning has given way to a challenging scenario in which educator and learner no longer share the same physical space. Distance Education (EAD) is still seen as a solution that attracts more and more students of different ages who want an undergraduate degree or to continue one. The research entitled "O estudante da EAD (educação a distância): um estudo de perfil e interação geracional" (The distance education student: a study of profile and generational interaction) aims to characterize the current profile of the distance education student, addressing the dialogue between generations in the school's social environment. The research approach is qualitative, exploratory and descriptive, with data collected through interviews with eight students from generations X and Y, in order to understand whether this profile has been renewed with students younger than the 25-to-45 age group. The results show that enrolments of students aged 17 to 24 grow by 1% each year, while the 25-to-45 age group still prevails, with 70% of enrolments. This result therefore reveals that the profile of the distance education student is still that of a young adult to more experienced adult who seeks a degree in order to advance professionally. The two generations mentioned, generation X and generation Y, even though they come from different historical contexts of values, beliefs and behaviours, currently take part in a social transformation that encompasses the means of production of work, educational training and social relations. Intergenerational dialogue leads to shared, participatory learning through the mutual exchange of experiences. For generation X, today's young person is no longer labelled as someone who only needs to listen and learn, but as someone with much to share, especially given their ease with technological media. And for generation Y, this sharing knows no age barriers, offering instead the confidence to interact and communicate through the exchange of experiences.
Abstract:
The inadequate conditions experienced in organizations afflict not only private-sector workers, for they are equally found in the public sector, contradicting the expectation that the machinery of government would eliminate unhealthy conditions and create better ones in which health promotion would prevail. Against this background, the question arose as to why this happens, given that, at least from the point of view of lay society, these civil servants work under privileged conditions. The present study aimed to identify and describe possible relationships between organizational climate and burnout in civil servants of a federal educational institution, and also to describe the prevailing organizational climate. The research was quantitative, exploratory and designed as a case study. Data were collected by means of the ECO scale (organizational climate scale), the ECB scale (burnout characterization scale) and a sociodemographic questionnaire, all self-administered instruments made available electronically to the institution. The study included 201 federal civil servants, with a mean age of 37, mostly with higher education and married. The results revealed that about a quarter of the participants rarely experienced burnout, whereas another quarter frequently experienced high levels of burnout, a rather striking result. The civil servants perceived a middling organizational climate, with good cohesion among co-workers and a perception of low reward standing out. The large dispersion among climate perceptions deserves attention; it suggests sub-climates not identified in this investigation, possibly caused by weak climate strength and by the participation of civil servants from geographically distinct teaching units managed by local managers with relative autonomy. The correlation results revealed that the less the participants perceive support from management and the organization and cohesion among colleagues, and the more control/pressure they perceive, the more exhausted they feel, the more they dehumanize the people they deal with and the more disappointed they are with their work, and vice versa. Lower physical comfort is associated with greater dehumanization and more disappointment at work, and vice versa; and control/pressure is positively and weakly related to dehumanization, and vice versa. Thus, the hypothesis that there is an association between burnout and organizational climate was confirmed. The results also revealed that civil servants with burnout perceived a worse organizational climate than their peers without burnout, confirming the second hypothesis. These civil servants were also neutral regarding the perception of support from management and physical comfort; they perceive neither control/pressure nor reward, yet they do perceive cohesion among colleagues. These results suggest that the participants have relied on these relationships to endure the indifference and lack of stimuli experienced at work. The results obtained in this study lead to the conclusion that the organizational climate is weak, probably influenced by a weak organizational culture, which explains the heterogeneity of the civil servants' perception of the organizational climate. Furthermore, although burnout affects few participants, it should be noted that about a quarter of them are afflicted by this syndrome, and this could spread to the others.
Abstract:
Receptors coupled to heterotrimeric G proteins can effectively stimulate growth-promoting pathways in a large variety of cell types, and if persistently activated, these receptors can also behave as dominant-acting oncoproteins. Consistently, activating mutations for G proteins of the Gαs and Gαi2 families were found in human tumors; and members of the Gαq and Gα12 families are fully transforming when expressed in murine fibroblasts. In an effort aimed at elucidating the molecular events involved in proliferative signaling through heterotrimeric G proteins, we have recently focused on gene expression regulation. Using NIH 3T3 fibroblasts expressing m1 muscarinic acetylcholine receptors as a model system, we have observed that activation of this transforming G protein-coupled receptor induces the rapid expression of a variety of early responsive genes, including the c-fos protooncogene. One of the c-fos promoter elements, the serum response element (SRE), plays a central regulatory role, and activation of SRE-dependent transcription has been found to be regulated by several proteins, including the serum response factor and the ternary complex factor. With the aid of reporter plasmids for gene expression, we observed here that stimulation of m1 muscarinic acetylcholine receptors potently induced SRE-driven reporter gene activity in NIH 3T3 cells. In these cells, only the Gα12 family of heterotrimeric G protein α subunits strongly induced the SRE, while Gβ1γ2 dimers activated SRE to a more limited extent. Furthermore, our study provides strong evidence that m1, Gα12 and the small GTP-binding protein RhoA are components of a novel signal transduction pathway that leads to the ternary complex factor-independent transcriptional activation of the SRE and to cellular transformation.
Abstract:
Structure–function studies of rhodopsin kinase (RK; EC 2.7.1.125) require a variety of mutants. Therefore, there is a need for a suitable system for the expression of RK mutant genes. Here we report on a study of expression of the RK gene in baculovirus-infected Sf21 cells and characterization of the enzyme produced, purified to near homogeneity. Particular attention has been paid to the post-translational modifications, autophosphorylation and isoprenylation, found in the native bovine RK. The protein produced has been purified using, successively, heparin-Sepharose, Mono Q, and Mono S FPLC (fast protein liquid chromatography) and was obtained in amounts of about 2 mg from 1 liter of cell culture. The enzyme from the last step of purification was obtained in two main fractions that differ in the level of phosphorylation. The protein peak eluted first carries two phosphate groups per protein, whereas the second protein peak is monophosphorylated. Further, while both peaks are isoprenylated, the isoprenyl groups consist of mixtures of C5, C10, C15, and C20 isoprenyl moieties. From these results, we conclude that the above expression system is suitable for some but not all aspects of structure–function studies.
Abstract:
Brefeldin A (BFA) inhibited the exchange of ADP ribosylation factor (ARF)-bound GDP for GTP by a Golgi-associated guanine nucleotide-exchange protein (GEP) [Helms, J. B. & Rothman, J. E. (1992) Nature (London) 360, 352–354; Donaldson, J. G., Finazzi, D. & Klausner, R. D. (1992) Nature (London) 360, 350–352]. Cytosolic ARF GEP was also inhibited by BFA, but after purification from bovine brain and rat spleen, it was no longer BFA-sensitive [Tsai, S.-C., Adamik, R., Moss, J. & Vaughan, M. (1996) Proc. Natl. Acad. Sci. USA 93, 305–309]. We describe here purification from bovine brain cytosol of a BFA-inhibited GEP. After chromatography on DEAE–Sephacel, hydroxylapatite, and Mono Q and precipitation at pH 5.8, GEP was eluted from Superose 6 as a large molecular weight complex at the position of thyroglobulin (≈670 kDa). After SDS/PAGE of samples from column fractions, silver-stained protein bands of ≈190 and 200 kDa correlated with activity. BFA-inhibited GEP activity of the 200-kDa protein was demonstrated following electroelution from the gel and renaturation by dialysis. Four tryptic peptides from the 200-kDa protein had amino acid sequences that were 47% identical to sequences in Sec7 from Saccharomyces cerevisiae (total of 51 amino acids), consistent with the view that the BFA-sensitive 200-kDa protein may be a mammalian counterpart of Sec7 that plays a similar role in cellular vesicular transport and Sec7 may be a GEP for one or more yeast ARFs.
Abstract:
The b locus encodes a transcription factor that regulates the expression of genes that produce purple anthocyanin pigment. Different b alleles are expressed in distinct tissues, causing tissue-specific anthocyanin production. Understanding how phenotypic diversity is produced and maintained at the b locus should provide models for how other regulatory genes, including those that influence morphological traits and development, evolve. We have investigated how different levels and patterns of pigmentation have evolved by determining the phenotypic and evolutionary relationships between 18 alleles that represent the diversity of b alleles in Zea mays. Although most of these alleles have few phenotypic differences, five alleles have very distinct tissue-specific patterns of pigmentation. Superimposing the phenotypes on the molecular phylogeny reveals that the alleles with strong and distinctive patterns of expression are closely related to alleles with weak expression, implying that the distinctive patterns have arisen recently. We have identified apparent insertions in three of the five phenotypically distinct alleles, and the fourth has unique upstream restriction fragment length polymorphisms relative to closely related alleles. The insertion in B-Peru has been shown to be responsible for its unique expression and, in the other two alleles, the presence of the insertion correlates with the phenotype. These results suggest that major changes in gene expression are probably the result of large-scale changes in DNA sequence and/or structure most likely mediated by transposable elements.
Abstract:
GAIP (G Alpha Interacting Protein) is a member of the recently described RGS (Regulators of G-protein Signaling) family that was isolated by interaction cloning with the heterotrimeric G-protein Gαi3 and was recently shown to be a GTPase-activating protein (GAP). In AtT-20 cells stably expressing GAIP, we found that GAIP is membrane-anchored and faces the cytoplasm, because it was not released by sodium carbonate treatment but was digested by proteinase K. When Cos cells were transiently transfected with GAIP and metabolically labeled with [35S]methionine, two pools of GAIP—a soluble and a membrane-anchored pool—were found. Since the N terminus of GAIP contains a cysteine string motif and cysteine string proteins are heavily palmitoylated, we investigated the possibility that membrane-anchored GAIP might be palmitoylated. We found that after labeling with [3H]palmitic acid, the membrane-anchored pool but not the soluble pool was palmitoylated. In the yeast two-hybrid system, GAIP was found to interact specifically with members of the Gαi subfamily, Gαi1, Gαi2, Gαi3, Gαz, and Gαo, but not with members of other Gα subfamilies, Gαs, Gαq, and Gα12/13. The C terminus of Gαi3 is important for binding because a 10-aa C-terminal truncation and a point mutant of Gαi3 showed significantly diminished interaction. GAIP interacted preferentially with the activated (GTP) form of Gαi3, which is in keeping with its GAP activity. We conclude that GAIP is a membrane-anchored GAP with a cysteine string motif. This motif, present in cysteine string proteins found on synaptic vesicles, pancreatic zymogen granules, and chromaffin granules, suggests GAIP’s possible involvement in membrane trafficking.
Abstract:
Relationships were examined between spatial learning and hippocampal concentrations of the α, β2, and γ isoforms of protein kinase C (PKC), an enzyme implicated in neuronal plasticity and memory formation. Concentrations of PKC were determined for individual 6-month-old (n = 13) and 24-month-old (n = 27) male Long–Evans rats trained in the water maze on a standard place-learning task and a transfer task designed for rapid acquisition. The results showed significant relationships between spatial learning and the amount of PKC among individual subjects, and those relationships differed according to age, isoform, and subcellular fraction. Among 6-month-old rats, those with the best spatial memory were those with the highest concentrations of PKCγ in the particulate fraction and of PKCβ2 in the soluble fraction. Aged rats had increased hippocampal PKCγ concentrations in both subcellular fractions in comparison with young rats, and memory impairment was correlated with higher PKCγ concentrations in the soluble fraction. No age difference or correlations with behavior were found for concentrations of PKCγ in a comparison structure, the neostriatum, or for PKCα in the hippocampus. Relationships between spatial learning and hippocampal concentrations of calcium-dependent PKC are isoform-specific. Moreover, age-related spatial memory impairment is associated with altered subcellular concentrations of PKCγ and may be indicative of deficient signal transduction and neuronal plasticity in the hippocampal formation.
Abstract:
The Plasmodium falciparum Genome Database (http://PlasmoDB.org) integrates sequence information, automated analyses and annotation data emerging from the P.falciparum genome sequencing consortium. To date, raw sequence coverage is available for >90% of the genome, and two chromosomes have been finished and annotated. Data in PlasmoDB are organized by chromosome (1–14), and can be accessed using a variety of tools for graphical and text-based browsing or downloaded in various file formats. The GUS (Genomics Unified Schema) implementation of PlasmoDB provides a multi-species genomic relational database, incorporating data from human and mouse, as well as P.falciparum. The relational schema uses a highly structured format to accommodate diverse data sets related to genomic sequence and gene expression. Tools have been designed to facilitate complex biological queries, including many that are specific to Plasmodium parasites and malaria as a disease. Additional projects seek to integrate genomic information with the rich data sets now becoming available for RNA transcription, protein expression, metabolic pathways, genetic and physical mapping, antigenic and population diversity, and phylogenetic relationships with other apicomplexan parasites. The overall goal of PlasmoDB is to facilitate Internet- and CD-ROM-based access to both finished and unfinished sequence information by the global malaria research community.
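As a small illustration of working with data downloaded from PlasmoDB in one of the available file formats, the sketch below filters FASTA records by a keyword in their description line. The file name, record headers and keyword are hypothetical, and this is not part of PlasmoDB's own tooling.

```python
# Minimal sketch: scan a FASTA file as it might be downloaded from PlasmoDB
# and keep records whose description mentions a keyword. The file name and
# the keyword are hypothetical examples.
def read_fasta(path):
    """Yield (header, sequence) pairs from a FASTA file."""
    header, chunks = None, []
    with open(path) as handle:
        for line in handle:
            line = line.rstrip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(chunks)
                header, chunks = line[1:], []
            else:
                chunks.append(line)
    if header is not None:
        yield header, "".join(chunks)

keyword = "kinase"
for header, sequence in read_fasta("plasmodb_annotated_proteins.fasta"):
    if keyword in header.lower():
        print(f"{header}\t{len(sequence)} residues")
```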
Abstract:
Early in the development of plant evolutionary biology, genetic drift, fluctuations in population size, and isolation were identified as critical processes that affect the course of evolution in plant species. Attempts to assess these processes in natural populations became possible only with the development of neutral genetic markers in the 1960s. More recently, the application of historically ordered neutral molecular variation (within the conceptual framework of coalescent theory) has allowed a reevaluation of these microevolutionary processes. Gene genealogies trace the evolutionary relationships among haplotypes (alleles) within populations. Processes such as selection, fluctuation in population size, and population substructuring affect the geographical and genealogical relationships among these alleles. Therefore, examination of these genealogical data can provide insights into the evolutionary history of a species. For example, studies of Arabidopsis thaliana have suggested that this species underwent rapid expansion, with populations showing little genetic differentiation. The new discipline of phylogeography examines the distribution of allele genealogies in an explicit geographical context. Phylogeographic studies of plants have documented the recolonization of European tree species from refugia subsequent to Pleistocene glaciation, and such studies have been instructive in understanding the origin and domestication of the crop cassava. Currently, several technical limitations hinder the widespread application of a genealogical approach to plant evolutionary studies. However, as these technical issues are solved, a genealogical approach holds great promise for understanding these previously elusive processes in plant evolution.
Abstract:
Estimation of evolutionary distances has always been a major issue in the study of molecular evolution because evolutionary distances are required for estimating the rate of evolution in a gene, the divergence dates between genes or organisms, and the relationships among genes or organisms. Other closely related issues are the estimation of the pattern of nucleotide substitution, the estimation of the degree of rate variation among sites in a DNA sequence, and statistical testing of the molecular clock hypothesis. Mathematical treatments of these problems are considerably simplified by the assumption of a stationary process in which the nucleotide compositions of the sequences under study have remained approximately constant over time, and there now exist fairly extensive studies of stationary models of nucleotide substitution, although some problems remain to be solved. Nonstationary models are much more complex, but significant progress has been recently made by the development of the paralinear and LogDet distances. This paper reviews recent studies on the above issues and reports results on correcting the estimation bias of evolutionary distances, the estimation of the pattern of nucleotide substitution, and the estimation of rate variation among the sites in a sequence.
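As a worked example of the simplest stationary model mentioned above, the snippet below computes the Jukes-Cantor (JC69) distance, which corrects the observed proportion of differing sites for multiple substitutions; the more general corrections discussed in the paper (e.g., the paralinear and LogDet distances for nonstationary data) are not reproduced here.

```python
import math

def jukes_cantor_distance(seq1, seq2):
    """JC69 distance: d = -(3/4) * ln(1 - (4/3) * p), where p is the observed
    proportion of differing sites. Diverges as p approaches 0.75."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned and of equal length")
    p = sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

# Two aligned toy sequences differing at 1 of 10 sites (p = 0.1):
print(jukes_cantor_distance("ACGTACGTAC", "ACGTACGAAC"))  # ~0.107
```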
Abstract:
Thermoluminescence (TL) signals were recorded from grana stacks, margins, and stroma lamellae from fractionated, dark-adapted thylakoid membranes of spinach (Spinacia oleracea L.) in the absence and in the presence of 3-(3,4-dichlorophenyl)-1,1-dimethylurea (DCMU). In the absence of DCMU, the TL signal from grana fractions consisted of a homogeneous B-band, which originates from recombination of the semi-quinone QB− with the S2 state of the water-splitting complex and reflects active photosystem II (PSII). In the presence of DCMU, the B-band was replaced by the Q-band, which originates from an S2QA− recombination. Margin fractions mainly showed two TL-bands, the B- and C-bands, at approximately 50°C in the absence of DCMU, and Q- and C-bands in the presence of DCMU. The C-band is ascribed to a TyrD+-QA− recombination. In the absence of DCMU, the fractions of stromal lamellae mainly gave rise to a TL emission at 42°C. The intensity of this band was independent of the number of excitation flashes and was shifted to higher temperatures (52°C) after the addition of DCMU. Based on these observations, this band was considered to be a C-band. After photoinhibitory light treatment of uncoupled thylakoid membranes, the TL intensities of the B- and Q-bands decreased, whereas the intensity at 45°C (C-band) slightly increased. It is proposed that the 42 to 52°C band that was observed in marginal and stromal lamellae and in photoinhibited thylakoid membranes reflects inactive PSII centers that are assumed to be equivalent to inactive PSII QB-nonreducing centers.
Sequence similarity analysis of Escherichia coli proteins: functional and evolutionary implications.
Abstract:
A computer analysis of 2328 protein sequences comprising about 60% of the Escherichia coli gene products was performed using methods for database screening with individual sequences and alignment blocks. A high fraction of E. coli proteins--86%--shows significant sequence similarity to other proteins in current databases; about 70% show conservation at least at the level of distantly related bacteria, and about 40% contain ancient conserved regions (ACRs) shared with eukaryotic or Archaeal proteins. For > 90% of the E. coli proteins, either functional information or sequence similarity, or both, are available. Forty-six percent of the E. coli proteins belong to 299 clusters of paralogs (intraspecies homologs) defined on the basis of pairwise similarity. Another 10% could be included in 70 superclusters using motif detection methods. The majority of the clusters contain only two to four members. In contrast, nearly 25% of all E. coli proteins belong to the four largest superclusters--namely, permeases, ATPases and GTPases with the conserved "Walker-type" motif, helix-turn-helix regulatory proteins, and NAD(FAD)-binding proteins. We conclude that bacterial protein sequences generally are highly conserved in evolution, with about 50% of all ACR-containing protein families represented among the E. coli gene products. With the current sequence databases and methods of their screening, computer analysis yields useful information on the functions and evolutionary relationships of the vast majority of genes in a bacterial genome. Sequence similarity with E. coli proteins allows the prediction of functions for a number of important eukaryotic genes, including several whose products are implicated in human diseases.
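The sketch below illustrates schematically how clusters of paralogs can be assembled from pairwise similarity: proteins linked directly or transitively by a significant hit fall into the same cluster (single-linkage via union-find). The gene names, similarity scores and threshold are placeholders, not the actual criteria of the analysis.

```python
# Schematic single-linkage clustering of proteins by pairwise similarity.
# Scores and the threshold are placeholders chosen for illustration.
pairwise_scores = {
    ("dnaK", "hscA"): 61.0,
    ("hscA", "hscC"): 48.0,
    ("recA", "radA"): 35.0,
    ("dnaK", "recA"): 8.0,   # below threshold: does not link the clusters
}
THRESHOLD = 25.0

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(x, y):
    parent[find(x)] = find(y)

for (a, b), score in pairwise_scores.items():
    find(a)
    find(b)                            # register both proteins
    if score >= THRESHOLD:
        union(a, b)

clusters = {}
for protein in parent:
    clusters.setdefault(find(protein), set()).add(protein)
print(list(clusters.values()))
# e.g. [{'dnaK', 'hscA', 'hscC'}, {'recA', 'radA'}]
```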