926 results for Unicode Common Locale Data Repository
Abstract:
Extracting opinions and emotions from text is becoming increasingly important, especially since the advent of micro-blogging and social networking. Opinion mining is particularly popular and is now supported by many public services, datasets, and lexical resources. Unfortunately, there are few available lexical and semantic resources for emotion recognition that could foster the development of new emotion-aware services and applications. The diversity of theories of emotion and the absence of a common vocabulary are two of the main barriers to the development of such resources. This situation motivated the creation of Onyx, a semantic vocabulary of emotions with a focus on lexical resources and emotion analysis services. It follows a linguistic Linked Data approach, is aligned with the Provenance Ontology, and has been integrated with the Lexicon Model for Ontologies (lemon), a popular RDF model for representing lexical entries. This approach also offers a new way to work with different theories of emotion. As part of this work, Onyx has been aligned with EmotionML and WordNet-Affect.
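To make the linguistic Linked Data approach concrete, here is a minimal sketch, in Python with rdflib, of how a lemon lexical entry might be annotated with an Onyx emotion. The namespace URIs and property names are assumptions for illustration and should be checked against the published Onyx specification.

```python
# Illustrative sketch only: namespaces and property names below are
# assumptions based on the abstract, not the verified Onyx spec.
from rdflib import Graph, Namespace, RDF

ONYX = Namespace("http://www.gsi.upm.es/ontologies/onyx/ns#")   # assumed URI
LEMON = Namespace("http://lemon-model.net/lemon#")              # assumed URI
EX = Namespace("http://example.org/lexicon#")                   # hypothetical

g = Graph()
g.bind("onyx", ONYX)
g.bind("lemon", LEMON)
g.bind("ex", EX)

# A lemon lexical entry for the word "joyful"...
entry = EX.joyful
g.add((entry, RDF.type, LEMON.LexicalEntry))

# ...annotated with an emotion drawn from some theory's category set.
emotion = EX.joyful_emotion
g.add((entry, ONYX.hasEmotionSet, emotion))        # property name assumed
g.add((emotion, ONYX.hasEmotionCategory, EX.Joy))  # category is hypothetical

print(g.serialize(format="turtle"))
```

Because the emotion category is just another RDF resource, swapping in a different theory of emotion means pointing at a different category vocabulary, which is the flexibility the abstract highlights.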
Abstract:
Data centers are found in every sector of the worldwide economy. They consist of tens of thousands of servers, serving millions of users globally, around the clock, every day of the year. In recent years, e-Science applications such as e-Health or Smart Cities have experienced significant development. The need to deal efficiently with the computational demands of next-generation applications, together with the increasing demand for resources in traditional applications, has facilitated the rapid proliferation and growth of data centers. A drawback of this capacity growth has been the rapid and dramatic increase in the energy consumption of these facilities. In 2010, data center electricity represented 1.3% of all electricity use in the world. In 2012 alone, global data center power demand grew 63% to 38 GW, and a further rise of 17% to 43 GW was estimated for 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions.
This PhD thesis addresses the energy challenge by proposing proactive and reactive thermal- and energy-aware optimization techniques that contribute to placing data centers on a more scalable energy curve. This work develops energy models and uses knowledge about the energy demand of the workload to be executed, and about the computational and cooling resources available at the data center, to optimize energy consumption. Moreover, data centers are considered as a crucial element within their application framework, optimizing not only the energy consumption of the facility but also the global energy consumption of the application. The main contributors to the energy consumption in a data center are the computing power drawn by IT equipment and the cooling power needed to keep the servers within a temperature range that ensures safe operation. Because of the cubic relation between fan power and fan speed, solutions based on over-provisioning cold air to the server usually lead to energy inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power because of the exponential dependence of leakage on temperature. Moreover, workload characteristics as well as allocation policies have an important impact on the leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, together with strategies to minimize server energy via joint cooling and workload management from a multivariate perspective.
When scaling to the data center level, a similar behavior in terms of leakage-cooling tradeoffs can be observed: as room temperature rises, the efficiency of the data room cooling units improves, but CPU temperature rises as well, and so does leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns due to both workload allocation and the heterogeneity of the computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective. Within the framework of next-generation applications, decisions taken at the application level can have a dramatic impact on the energy consumption of lower abstraction levels, i.e., the data center facility. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing the energy consumption of the overall system. The third main contribution is the energy optimization of the overall application by evaluating the energy costs of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques. In summary, the work presented in this PhD thesis makes contributions to leakage- and cooling-aware server modeling and optimization, and to data center thermal modeling and heterogeneity-aware resource allocation, and it develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
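As a rough illustration of the server-level tradeoff just described, the sketch below minimizes total power over fan speed in a toy model where fan power grows with the cube of fan speed and leakage grows exponentially with CPU temperature. Every constant, and the thermal model itself, is invented for illustration; the thesis derives such models from real measurements.

```python
# Toy model of the leakage-cooling tradeoff: all constants are hypothetical.
import numpy as np

def fan_power(speed_rpm):
    """Fan power grows with the cube of fan speed (fan affinity law)."""
    K_FAN = 1e-10  # hypothetical scaling constant, W/RPM^3
    return K_FAN * speed_rpm**3

def leakage_power(cpu_temp_c):
    """Leakage power grows roughly exponentially with chip temperature."""
    P0, ALPHA = 5.0, 0.03  # hypothetical fit parameters
    return P0 * np.exp(ALPHA * cpu_temp_c)

def cpu_temp(speed_rpm, dynamic_w):
    """Toy thermal model: more airflow lowers steady-state CPU temperature."""
    T_AMBIENT, R0 = 25.0, 40.0  # hypothetical ambient temp, base resistance
    resistance = R0 / (1.0 + speed_rpm / 2000.0)
    return T_AMBIENT + resistance * dynamic_w / 100.0

dynamic = 80.0  # W of dynamic power for a hypothetical workload
speeds = np.linspace(500, 6000, 200)
total = [fan_power(s) + leakage_power(cpu_temp(s, dynamic)) + dynamic
         for s in speeds]
best = speeds[int(np.argmin(total))]
print(f"Minimum-energy fan speed (toy model): {best:.0f} RPM")
```

The interior minimum appears because spinning the fans faster cools the chip and cuts leakage, but at a cubic cost in fan power; over-provisioning cold air overshoots that minimum, which is the inefficiency the abstract points out.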
Abstract:
Diabetes mellitus is a disorder of carbohydrate metabolism characterized by absent or insufficient secretion of insulin (a hormone produced by the pancreas), resulting either from a malfunction of the endocrine part of the pancreas or from the body's increasing resistance to this hormone. After digestion, the food we ingest is broken down into smaller chemical compounds by the exocrine tissues. The absence or reduced effectiveness of this polypeptide hormone prevents the metabolization of ingested carbohydrates, with two consequences: an increase in blood glucose concentration, since the cells cannot metabolize it; and the consumption of fatty acids by the liver, which releases ketone bodies to supply energy to the cells. This situation exposes the chronic patient to very high blood glucose concentrations, known as hyperglycemia, which in the medium or long term can cause multiple medical problems: ophthalmological, renal, cardiovascular, cerebrovascular, and neurological, among others. Diabetes represents a major public health problem and is among the most common diseases in developed countries, favored by factors such as obesity and sedentary lifestyles.
In this project we work with clinical experimentation data from patients with type 1 diabetes, an autoimmune disease in which the insulin-producing beta cells of the pancreas are destroyed, making the administration of exogenous insulin necessary. A patient with type 1 diabetes must therefore follow a treatment of subcutaneously administered insulin, adapted to their metabolic needs and lifestyle. To address the regulation of the patient's metabolic control through insulin therapy, we rely on the "Artificial Endocrine Pancreas" (PEA) project, which consists of an insulin infusion pump, a continuous glucose sensor, and a closed-loop control algorithm. The main objective of the PEA is to provide the patient with precision, efficacy, and safety in normalizing glycemic control and reducing the risk of hypoglycemia. The PEA operates via the subcutaneous route, so the delay introduced by insulin action, the delay in glucose measurement, and the errors introduced by continuous glucose sensors when they drift out of calibration all complicate the use of a control algorithm. At this point we must model the patient's glucose using predictive systems. A model is any element that allows us to predict the behavior of a system from input variables. What we obtain is a prediction of the future states of the patient's glucose, using already-known input variables of insulin, food intake, and glucose, since they occurred earlier in time. When the glucose predictor is used with parameters obtained in real time, the controller can indicate the future glucose level to support the decisions of the closed-loop (CL) controller. The predictors currently employed in the PEA are not working correctly because of the amount of information and the number of variables they must handle.
Data Mining, also referred to as Knowledge Discovery in Databases (KDD), has been defined as the non-trivial process of extracting implicit, previously unknown, and potentially useful information. It proceeds through the following phases of the knowledge extraction process: data selection, pre-processing, transformation, data mining, interpretation of results, evaluation, and knowledge acquisition. Through this process we seek to generate a single insulin-glucose model, fitted individually to each patient, capable at the same time of predicting future glucose states with real-time calculations from a set of input parameters. This project seeks to extract the information contained in a database of type 1 diabetic patients obtained from clinical experimentation, using Data Mining techniques. To achieve this goal, we have implemented a graphical interface that guides the user through each step of the KDD process, with graphical and statistical information at every stage. For the data mining stage, we use the WEKA toolkit, controlling all of its functions through Java so as to integrate them into the program we created. Finally, the project gains further potential from the possibility of deploying the service on Android devices, given how readily the code can be ported; with these devices, new and highly useful applications could be implemented or created for this field. As a conclusion of the project, and after an exhaustive analysis of the results obtained, we show how the insulin-glucose model is obtained for each patient.
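As a minimal sketch of the kind of per-patient insulin-glucose predictor described above, the following Python code fits an autoregressive model with exogenous inputs (ARX) to past glucose, insulin, and carbohydrate samples by least squares. The data, sampling rate, and lag order are all hypothetical; the project's actual models come from WEKA-based data mining, not this toy.

```python
# Hedged sketch: an ARX-style next-step glucose predictor fitted per patient.
import numpy as np

def fit_arx(glucose, insulin, carbs, lags=3):
    """Fit next-step glucose from `lags` past samples of each input signal."""
    rows, targets = [], []
    for t in range(lags, len(glucose) - 1):
        rows.append(np.concatenate([glucose[t - lags + 1:t + 1],
                                    insulin[t - lags + 1:t + 1],
                                    carbs[t - lags + 1:t + 1]]))
        targets.append(glucose[t + 1])
    coeffs, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return coeffs

def predict_next(coeffs, glucose, insulin, carbs, lags=3):
    """Predict the next glucose sample from the most recent history."""
    features = np.concatenate([glucose[-lags:], insulin[-lags:], carbs[-lags:]])
    return float(features @ coeffs)

# Hypothetical 5-minute samples: glucose (mg/dL), insulin (U), carbs (g).
rng = np.random.default_rng(0)
g = 120 + np.cumsum(rng.normal(0, 2, 200))
i = rng.uniform(0, 0.5, 200)
c = rng.choice([0.0, 0.0, 0.0, 15.0], 200)
coeffs = fit_arx(g, i, c)
print(f"Predicted next glucose: {predict_next(coeffs, g, i, c):.1f} mg/dL")
```

The point of the sketch is the data flow the abstract describes: known past insulin, intake, and glucose go in as inputs, and a per-patient model returns the future glucose state for the closed-loop controller to act on.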
Abstract:
The Internet is evolving toward what is known as the Live Web. In this new stage of its evolution, a multitude of social data streams are put at the service of users, who no longer browse static web pages but interact with applications that offer personalized content based on their preferences. Each user interacts daily with multiple applications that issue notifications and alerts; in this sense every user is a source of events, and users often feel overwhelmed, unable to process all that information on demand. To cope with this overload, many tools have appeared that automate the most common tasks, from inbox managers and social network alert managers to complex CRMs and smart-home hubs. The downside is that, although they solve common problems, they cannot adapt to the needs of each user by offering a personalized solution. Task Automation Services (TAS) entered the scene from 2012 onward to address this limitation. Given their resemblance, these services can also be considered a new, user-centered approach to mash-up technology. Users of these platforms can interconnect services, sensors, and other Internet-connected devices, designing the automations that fit their needs. The approach has been widely accepted by users, which has prompted a multitude of platforms offering TAS to enter the market.
As this is a new field of research, this thesis presents the main characteristics of TAS, describes their components, and identifies the fundamental dimensions that define them and allow their classification. This work coins the term Task Automation Service (TAS), gives a formal description of these services and their components (called channels), and provides a reference architecture. There is also a lack of tools for describing automation services and automation rules. In this regard, this thesis proposes a common model, formalized as the EWE (Evented WEb) ontology. This model makes it possible to compare and map channels and automations across different TASs, a considerable contribution to the portability of user automations between platforms. Moreover, given the semantic nature of the model, automations can include elements from external sources over which to reason, such as Linked Open Data. Using this model, a dataset of channels and automations has been generated from data obtained from some of the TASs on the market. As a final step toward a common model for describing TAS, an algorithm has been developed to learn ontologies automatically from the dataset; this favors the discovery of new channels and reduces the maintenance cost of the model, which is updated semi-automatically.
In conclusion, the main contributions of this thesis are: i) surveying the state of the art in task automation and coining the term Task Automation Service; ii) developing an ontology for modeling TAS components and automations; iii) populating a dataset of channel and automation data, used to develop an ontology learning algorithm; and iv) designing an agent architecture for assisting users in setting up automations, one that is aware of their context and acts accordingly.
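A minimal sketch of the channel/automation model that the thesis formalizes in the EWE ontology: channels expose events and actions, and an automation wires an event on one channel to an action on another. All class and field names here are illustrative assumptions, not the ontology's actual terms.

```python
# Hedged sketch of TAS channels and automations; names are illustrative.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Channel:
    """A service, sensor, or device exposing events and actions."""
    name: str
    events: Dict[str, dict] = field(default_factory=dict)
    actions: Dict[str, Callable[[dict], None]] = field(default_factory=dict)

@dataclass
class Automation:
    """An end-user rule: when `event` fires on one channel, run `action`."""
    trigger_channel: Channel
    event: str
    action_channel: Channel
    action: str

    def fire(self, payload: dict) -> None:
        self.action_channel.actions[self.action](payload)

# Hypothetical channels and rule, in the spirit of TAS platforms.
weather = Channel("weather", events={"rain_forecast": {}})
mail = Channel("mail", actions={"send": lambda p: print("mail:", p["text"])})
rule = Automation(weather, "rain_forecast", mail, "send")
rule.fire({"text": "Rain expected tomorrow, take an umbrella."})
```

Describing the same structure in RDF, as EWE does, is what lets automations from different platforms be compared and ported, and lets rules reason over external Linked Open Data.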
Abstract:
The project arises from a previous project in which a model was constructed to represent information about higher education through a network of ontologies, providing a common definition of important concepts. This project develops a tool capable of generating educational data from the ontology network mentioned above, following the Linked Data paradigm [1]. The tool extracts data from different educational sources and transforms that educational data into Linked Data. To carry out this work, GATE Developer [2] has been used: a development environment that provides a complete set of interactive graphical tools for building, measuring, and maintaining software components for human language processing.
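As an illustration of the Linked Data generation step, the sketch below maps a plain educational record onto RDF triples with rdflib. The vocabulary URIs are hypothetical; the actual project uses its own ontology network and GATE-based extraction.

```python
# Hedged sketch: turning one educational record into Linked Data triples.
from rdflib import Graph, Literal, Namespace, RDF

EDU = Namespace("http://example.org/education#")  # hypothetical vocabulary
g = Graph()
g.bind("edu", EDU)

record = {"id": "course42", "title": "Linear Algebra", "credits": 6}

course = EDU[record["id"]]
g.add((course, RDF.type, EDU.Course))
g.add((course, EDU.title, Literal(record["title"])))
g.add((course, EDU.credits, Literal(record["credits"])))

print(g.serialize(format="turtle"))
```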
Abstract:
A dynamic capsid is critical to the events that shape the viral life cycle; events such as cell attachment, cell entry, and nucleic acid release demand a highly mobile viral surface. Protein mass mapping of the common cold virus, human rhinovirus 14 (HRV14), revealed both viral structural dynamics and the inhibition of such dynamics with an antiviral agent, WIN 52084. Viral capsid digestion fragments resulting from proteolytic time-course experiments provided structural information in good agreement with the HRV14 three-dimensional crystal structure. As expected, initial digestion fragments included peptides from the capsid protein VP1, the most external viral protein. Initial digestion fragments also included peptides belonging to VP4, the most internal capsid protein. The mass spectral results together with x-ray crystallography data provide information consistent with a "breathing" model of the viral capsid. Whereas the crystal structure of HRV14 shows VP4 to be the most internal capsid protein, mass spectral results show VP4 fragments to be among the first digestion fragments observed. Taken together, this information demonstrates that VP4 is transiently exposed to the viral surface via viral breathing. Comparative digests of HRV14 in the presence and absence of WIN 52084 revealed a dramatic inhibition of digestion. These results indicate that the binding of the antiviral agent not only causes local conformational changes in the drug binding pocket but actually stabilizes the entire viral capsid against enzymatic degradation. Viral capsid mass mapping provides a fast and sensitive method for probing viral structural dynamics as well as a means for investigating antiviral drug efficacy.
Abstract:
The small subunit of calpain, a calcium-dependent cysteine protease, was found to interact with the cytoplasmic domain of the common cytokine receptor γ chain (γc) in a yeast two-hybrid interaction trap assay. This interaction was functional as demonstrated by the ability of calpain to cleave in vitro-translated wild-type γc, but not γc containing a mutation in the PEST (proline, glutamate, serine, and threonine) sequence in its cytoplasmic domain, as well as by the ability of endogenous calpain to mediate cleavage of γc in a calcium-dependent fashion. In T cell receptor-stimulated murine thymocytes, calpain inhibitors decreased cleavage of γc. Moreover, in single positive CD4+ thymocytes, not only did a calpain inhibitor augment CD3-induced proliferation, but antibodies to γc blocked this effect. Finally, treatment of cells with ionomycin could inhibit interleukin 2-induced STAT protein activation, but this inhibition could be reversed by calpain inhibitors. Together, these data suggest that calpain-mediated cleavage of γc represents a mechanism by which γc-dependent signaling can be controlled.
Abstract:
We introduce a method of functionally classifying genes by using gene expression data from DNA microarray hybridization experiments. The method is based on the theory of support vector machines (SVMs). SVMs are considered a supervised computer learning method because they exploit prior knowledge of gene function to identify unknown genes of similar function from expression data. SVMs avoid several problems associated with unsupervised clustering methods, such as hierarchical clustering and self-organizing maps. SVMs have many mathematical features that make them attractive for gene expression analysis, including their flexibility in choosing a similarity function, sparseness of solution when dealing with large data sets, the ability to handle large feature spaces, and the ability to identify outliers. We test several SVMs that use different similarity metrics, as well as some other supervised learning methods, and find that the SVMs best identify sets of genes with a common function using expression data. Finally, we use SVMs to predict functional roles for uncharacterized yeast ORFs based on their expression data.
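A minimal sketch of this workflow using scikit-learn: SVMs with different kernels (similarity functions) are cross-validated on expression profiles labeled by membership in a functional class. The data below are synthetic stand-ins, not the paper's actual microarray measurements.

```python
# Hedged sketch of SVM-based functional gene classification on synthetic data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_genes, n_conditions = 200, 80
X = rng.normal(size=(n_genes, n_conditions))  # expression profiles
y = rng.integers(0, 2, n_genes)               # in/out of a functional class
X[y == 1] += 0.8                              # make the class learnable

# Try several similarity metrics, as the study does with different kernels.
for kernel in ("linear", "poly", "rbf"):
    score = cross_val_score(SVC(kernel=kernel), X, y, cv=5).mean()
    print(f"{kernel:6s} kernel: {score:.2f} cross-validated accuracy")
```

The supervised framing is the key point: the labels y encode prior knowledge of gene function, which unsupervised clustering methods cannot exploit.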
Abstract:
Converging TGF-β and insulin-like neuroendocrine signaling pathways regulate whether Caenorhabditis elegans develops reproductively or arrests at the dauer larval stage. We examined whether neurotransmitters act in the dauer entry or recovery pathways. Muscarinic agonists promote recovery from dauer arrest induced by pheromone as well as by mutations in the TGF-β pathway. Dauer recovery in these animals is inhibited by the muscarinic antagonist atropine. Muscarinic agonists do not induce dauer recovery of either daf-2 or age-1 mutant animals, which have defects in the insulin-like signaling pathway. These data suggest that a metabotropic acetylcholine signaling pathway activates an insulin-like signal during C. elegans dauer recovery. Analogous and perhaps homologous cholinergic regulation of mammalian insulin release by the autonomic nervous system has been noted. In the parasitic nematode Ancylostoma caninum, the dauer larval stage is the infective stage, and recovery to the reproductive stage normally is induced by host factors. Muscarinic agonists also induce and atropine potently inhibits in vitro recovery of A. caninum dauer arrest. We suggest that host or parasite insulin-like signals may regulate recovery of A. caninum and could be potential targets for antihelminthic drugs.
Abstract:
Methylation of cytosine in the 5 position of the pyrimidine ring is a major modification of the DNA in most organisms. In eukaryotes, the distribution and number of 5-methylcytosines (5mC) along the DNA is heritable but can also change with the developmental state of the cell and as a response to modifications of the environment. While DNA methylation probably has a number of functions, scientific interest has recently focused on the gene silencing effect methylation can have in eukaryotic cells. In particular, the discovery of changes in the methylation level during cancer development has increased the interest in this field. In the past, a vast amount of data has been generated with different levels of resolution ranging from 5mC content of total DNA to the methylation status of single nucleotides. We present here a database for DNA methylation data that attempts to unify these results in a common resource. The database is accessible via WWW (http://www.methdb.de). It stores information about the origin of the investigated sample and the experimental procedure, and contains the DNA methylation data. Query masks allow for searching for 5mC content, species, tissue, gene, sex, phenotype, sequence ID and DNA type. The output lists all available information including the relative gene expression level. DNA methylation patterns and methylation profiles are shown both as a graphical representation and as G/A/T/C/5mC-sequences or tables with sequence positions and methylation levels, respectively.
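To illustrate the kind of search the query masks support, here is a hypothetical sketch against an invented relational schema. MethDB itself is queried through its web forms at http://www.methdb.de; this schema is not its actual design.

```python
# Hedged sketch: the searchable fields listed above, as an invented SQL schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE methylation (
    species TEXT, tissue TEXT, gene TEXT, sex TEXT,
    phenotype TEXT, dna_type TEXT, mc_content REAL)""")
conn.execute("INSERT INTO methylation VALUES "
             "('Homo sapiens', 'liver', 'MLH1', 'F', 'tumor', 'genomic', 4.2)")

rows = conn.execute(
    "SELECT gene, mc_content FROM methylation "
    "WHERE species = ? AND tissue = ?", ("Homo sapiens", "liver")).fetchall()
print(rows)
```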
Abstract:
Census data on endangered species are often sparse, error-ridden, and confined to only a segment of the population. Estimating trends and extinction risks using this type of data presents numerous difficulties. In particular, the estimate of the variation in year-to-year transitions in population size (the “process error” caused by stochasticity in survivorship and fecundities) is confounded by the addition of high sampling error variation. In addition, the year-to-year variability in the segment of the population that is sampled may be quite different from the population variability that one is trying to estimate. The combined effect of severe sampling error and age- or stage-specific counts leads to severe biases in estimates of population-level parameters. I present an estimation method that circumvents the problem of age- or stage-specific counts and is markedly robust to severe sampling error. This method allows the estimation of environmental variation and population trends for extinction-risk analyses using corrupted census counts—a common type of data for endangered species that has hitherto been relatively unusable for these analyses.
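As a hedged illustration of the problem this abstract tackles: under a stochastic exponential growth model with independent sampling error, the variance of lag-tau differences of log counts grows linearly with tau, with slope equal to the process variance and intercept equal to twice the sampling variance. The sketch below demonstrates that idea on simulated corrupted counts; it is one common way to separate the two error sources, not necessarily the author's exact estimator.

```python
# Hedged sketch: separating process error from sampling error in noisy counts.
import numpy as np

rng = np.random.default_rng(1)
T, mu, s2_proc, s2_obs = 40, -0.02, 0.01, 0.04
true_log_n = np.cumsum(rng.normal(mu, np.sqrt(s2_proc), T))     # true dynamics
counts = true_log_n + rng.normal(0, np.sqrt(s2_obs), T)         # corrupted census

# Var(log N[t+k] - log N[t]) = s2_proc * k + 2 * s2_obs
lags = np.arange(1, 6)
lag_vars = [np.var(counts[k:] - counts[:-k], ddof=1) for k in lags]
slope, intercept = np.polyfit(lags, lag_vars, 1)
print(f"process variance  ~ {slope:.3f} (true {s2_proc})")
print(f"sampling variance ~ {intercept / 2:.3f} (true {s2_obs})")
```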
Abstract:
Mobile element dynamics in seven alleles of the chalcone synthase D locus (CHS-D) of the common morning glory (Ipomoea purpurea) are analyzed in the context of synonymous nucleotide sequence distances for CHS-D exons. By using a nucleotide sequence of CHS-D from the sister species Ipomoea nil (Japanese morning glory) [Johzuka-Hisatomi, Y., Hoshino, A., Mori, T., Habu, Y. & Iida, S. (1999) Genes Genet. Syst. 74, 141–147], it is also possible to determine the relative frequency of insertion and loss of elements within the CHS-D locus between these two species. At least four different types of transposable elements exist upstream of the coding region, or within the single intron of the CHS-D locus in I. purpurea. There are three distinct families of miniature inverted-repeat transposable elements (MITEs), and some recent transpositions of Activator/Dissociation (Ac/Ds)-like elements (Tip100), of some short interspersed repetitive elements (SINEs), and of an insertion sequence (InsIpCHSD) found in the neighborhood of this locus. The data provide no compelling evidence of the transposition of the MITEs since the separation of I. nil and I. purpurea roughly 8 million years ago. Finally, it is shown that the number and frequency of mobile elements are highly heterogeneous among different duplicate CHS loci, suggesting that the dynamics observed at CHS-D are locus-specific.
Abstract:
Molecular and morphological data have important roles in illuminating evolutionary history. DNA data often yield well resolved phylogenies for living taxa, but are generally unattainable for fossils. A distinct advantage of morphology is that some types of morphological data may be collected for extinct and extant taxa. Fossils provide a unique window on evolutionary history and may preserve combinations of primitive and derived characters that are not found in extant taxa. Given their unique character complexes, fossils are critical in documenting sequences of character transformation over geologic time and may elucidate otherwise ambiguous patterns of evolution that are not revealed by molecular data alone. Here, we employ a methodological approach that allows for the integration of molecular and paleontological data in deciphering one of the most innovative features in the evolutionary history of mammals—laryngeal echolocation in bats. Molecular data alone, including an expanded data set that includes new sequences for the A2AB gene, suggest that microbats are paraphyletic but do not resolve whether laryngeal echolocation evolved independently in different microbat lineages or evolved in the common ancestor of bats and was subsequently lost in megabats. When scaffolds from molecular phylogenies are incorporated into parsimony analyses of morphological characters, including morphological characters for the Eocene taxa Icaronycteris, Archaeonycteris, Hassianycteris, and Palaeochiropteryx, the resulting trees suggest that laryngeal echolocation evolved in the common ancestor of fossil and extant bats and was subsequently lost in megabats. Molecular dating suggests that crown-group bats last shared a common ancestor 52 to 54 million years ago.
Abstract:
We have investigated the spatial distributions of expansion and cell cycle in sunflower (Helianthus annuus L.) leaves located at two positions on the stem, from leaf initiation to the end of expansion. Relative expansion rate (RER) was analyzed by following the deformation of a grid drawn on the lamina; relative division rate (RDR) and flow-cytometry data were obtained in four zones perpendicular to the midrib. Calculations for determining in situ durations of the cell cycle and of S-G2-M in the epidermis are proposed. Area and cell number of a given leaf zone increased exponentially during the first two-thirds of the development duration. RER and RDR were constant and similar in all zones of a leaf and in all studied leaves during this period. Reduction in RER occurred afterward with a tip-to-base gradient and lagged behind that of RDR by 4 to 5 d in all zones. After a long period of constancy, cell-cycle duration increased rapidly and simultaneously within a leaf zone, with cells blocked in the G0-G1 phase of the cycle. Cells that began their cycle after the end of the period with exponential increase in cell number could not finish it, suggesting that they abruptly lost their competence to cross a critical step of the cycle. Differences in area and in cell number among zones of a leaf and among leaves of a plant essentially depended on the timing of two events, cessation of exponential expansion and of exponential division.
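For reference, relative rates of this kind are the slopes of log-transformed measurements over time, and during strictly exponential growth the mean cell-cycle duration is ln(2)/RDR. A small sketch with hypothetical numbers, not the study's data:

```python
# Hedged sketch of the standard relative-rate calculations behind RER and RDR.
import numpy as np

days = np.array([0.0, 2.0, 4.0, 6.0])
area_mm2 = np.array([1.0, 2.6, 7.1, 19.0])       # grid-tracked zone area
cells = np.array([400.0, 1050.0, 2800.0, 7400.0])  # cell counts in the zone

rer = np.polyfit(days, np.log(area_mm2), 1)[0]   # relative expansion rate, 1/d
rdr = np.polyfit(days, np.log(cells), 1)[0]      # relative division rate, 1/d
print(f"RER = {rer:.2f}/d, RDR = {rdr:.2f}/d, "
      f"cell cycle ~ {np.log(2) / rdr:.1f} d (exponential phase only)")
```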
Abstract:
Trichomonads are among the earliest eukaryotes to diverge from the main line of eukaryotic descent. In keeping with their ancient nature, these facultative anaerobic protists lack two "hallmark" organelles found in most eukaryotes: mitochondria and peroxisomes. Trichomonads do, however, contain an unusual organelle involved in carbohydrate metabolism called the hydrogenosome. Like mitochondria, hydrogenosomes are double-membrane-bounded organelles that produce ATP using pyruvate as the primary substrate. Hydrogenosomes are, however, markedly different from mitochondria as they lack DNA, cytochromes and the citric acid cycle. Instead, they contain enzymes typically found in anaerobic bacteria and are capable of producing molecular hydrogen. We show here that hydrogenosomes contain heat shock proteins, Hsp70, Hsp60, and Hsp10, with signature sequences that are conserved only in mitochondrial and alpha-Gram-negative purple bacterial Hsps. Biochemical analysis of hydrogenosomal Hsp60 shows that the mature protein isolated from the organelle lacks a short, N-terminal sequence, similar to that observed for most nuclear-encoded mitochondrial matrix proteins. Moreover, phylogenetic analyses of hydrogenosomal Hsp70, Hsp60, and Hsp10 show that these proteins branch within a monophyletic group composed exclusively of mitochondrial homologues. These data establish that mitochondria and hydrogenosomes have a common eubacterial ancestor and imply that the earliest-branching eukaryotes contained the endosymbiont that gave rise to mitochondria in higher eukaryotes.