729 results for NORMALIZATION


Relevance:

10.00%

Publisher:

Abstract:

PAMELA (Phased Array Monitoring for Enhanced Life Assessment) SHM™ is an integrated, embedded system based on ultrasonic guided waves, consisting of several electronic devices and one system-manager controller. The data collected by all PAMELA devices in the system must be transmitted to the controller, which is responsible for the advanced signal processing needed to obtain SHM maps. PAMELA devices are built on a Virtex 5 FPGA with a PowerPC 440 running an embedded Linux distribution. In addition to performing tests and transmitting the collected data to the controller, the devices can therefore carry out local data processing or pre-processing (reduction, normalization, pattern recognition, feature extraction, etc.). Local data processing decreases the data traffic over the network and reduces the CPU load of the external computer. PAMELA devices can even run autonomously, performing scheduled tests and communicating with the controller only when structural damage is detected or when programmed to do so. Each PAMELA device integrates a software management application (SMA) that allows developers to download their own algorithm code and add new data processing algorithms to the device. The SMA is developed in a virtual machine with an Ubuntu Linux distribution that includes all the software tools needed for the entire development cycle. The Eclipse IDE (Integrated Development Environment) is used to develop the SMA project and to write the code of each data processing algorithm. This paper presents the developed software architecture and describes the steps necessary to add new data processing algorithms to the SMA in order to increase the processing capabilities of PAMELA devices. An example of basic damage index estimation using a delay-and-sum algorithm is provided.
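As an illustration of the kind of local processing an SMA algorithm could implement, the sketch below computes a basic delay-and-sum damage-index map from baseline-subtracted guided-wave signals. It is a minimal sketch and not the PAMELA/SMA code: the transducer coordinates, sampling rate, assumed group velocity and the plain rectified envelope are simplifications introduced for the example.

```python
# Minimal delay-and-sum sketch for guided-wave damage imaging.
# NOT the PAMELA SMA implementation; geometry, sampling rate and
# group velocity are illustrative assumptions.
import numpy as np

def delay_and_sum(signals, tx_pos, rx_pos, grid_x, grid_y, fs, velocity):
    """Compute a damage-index map by summing, for every pixel, the residual
    signal amplitude at the actuator->pixel->sensor time of flight.

    signals : (n_paths, n_samples) residual (baseline-subtracted) signals
    tx_pos, rx_pos : (n_paths, 2) actuator / sensor coordinates per path [m]
    grid_x, grid_y : 1-D arrays defining the imaging grid [m]
    fs : sampling frequency [Hz]
    velocity : assumed group velocity of the guided-wave mode [m/s]
    """
    n_paths, n_samples = signals.shape
    envelopes = np.abs(signals)        # crude envelope; a Hilbert envelope would be smoother
    image = np.zeros((grid_y.size, grid_x.size))
    for iy, y in enumerate(grid_y):
        for ix, x in enumerate(grid_x):
            pixel = np.array([x, y])
            acc = 0.0
            for p in range(n_paths):
                d = np.linalg.norm(pixel - tx_pos[p]) + np.linalg.norm(pixel - rx_pos[p])
                idx = int(round(d / velocity * fs))    # time of flight in samples
                if idx < n_samples:
                    acc += envelopes[p, idx]
            image[iy, ix] = acc / n_paths              # basic damage index per pixel
    return image
```

In a real deployment the per-path signals would come from the device's phased-array acquisitions, and the pixels with the largest accumulated amplitude indicate candidate damage locations.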

Relevance:

10.00%

Publisher:

Abstract:

Diabetes mellitus is a disorder of carbohydrate metabolism characterized by absent or insufficient secretion of insulin (a hormone produced by the pancreas), as a result of a malfunction of the endocrine part of the pancreas or of a growing resistance of the organism to this hormone. After digestion, the food we eat is transformed into smaller chemical compounds by the exocrine tissues. The absence or limited effectiveness of this polypeptide hormone prevents the ingested carbohydrates from being metabolized, with two consequences: an increase in the concentration of glucose in the blood, since the cells cannot metabolize it, and the consumption of fatty acids by the liver, which releases ketone bodies to supply energy to the cells. This situation exposes the chronic patient to very high blood glucose concentrations, known as hyperglycemia, which in the medium or long term can cause multiple medical problems: ophthalmological, renal, cardiovascular, cerebrovascular and neurological, among others. Diabetes represents a major public health problem and is one of the most common diseases in developed countries, favored by factors such as obesity and a sedentary lifestyle. In this project we work with clinical experimentation data from patients with type 1 diabetes, an autoimmune disease in which the beta cells of the pancreas (the insulin producers) are destroyed, so that the administration of exogenous insulin becomes necessary. A patient with type 1 diabetes must therefore follow a treatment with insulin administered subcutaneously, adapted to his or her metabolic needs and life habits. To address the regulation of the patient's metabolic control through insulin therapy, we make use of the "Páncreas Endocrino Artificial" (PEA, Artificial Endocrine Pancreas) project, which consists of an insulin infusion pump, a continuous glucose sensor and a closed-loop control algorithm. The main objective of the PEA is to provide the patient with precision, efficacy and safety in normalizing glycemic control and reducing the risk of hypoglycemia. The PEA operates through the subcutaneous route, so the delay introduced by the action of the insulin, the delay of the glucose measurement and the errors introduced by continuous glucose sensors when they lose calibration all hinder the use of a control algorithm. At this point the patient's glucose must be modeled using predictive systems. A model is any element that allows the behavior of a system to be predicted by introducing input variables. What we obtain is thus a prediction of the future states of the patient's glucose, using as inputs insulin, meal intake and glucose values that are already known because they occurred earlier in time. When the glucose predictor is run with parameters obtained in real time, it can indicate the future glucose level to support the decisions of the closed-loop (CL) controller. The predictors currently used in the PEA are not working correctly because of the amount of information and the number of variables they have to handle.
Data Mining, also referred to as Knowledge Discovery in Databases (KDD), has been defined as the non-trivial extraction of implicit, previously unknown and potentially useful information. It follows these phases of the knowledge extraction process: data selection, pre-processing, transformation, data mining, interpretation of the results, evaluation and knowledge acquisition. With this process we seek to generate a single insulin-glucose model that is fitted individually to each patient and is able, at the same time, to predict future glucose states through real-time calculations from a set of input parameters. This final degree project aims to extract the information contained in a database of type 1 diabetic patients obtained from clinical experimentation, using Data Mining techniques. To achieve this objective we have implemented a graphical interface that guides the user through the KDD process, with graphical and statistical information at each step. For the data mining part we rely on the WEKA tool, whose functions are controlled through Java and integrated into the program we created. Finally, the project gains additional potential from the possibility of porting the code to Android devices, through which new and useful applications for this field could be implemented. As a conclusion of the project, and after an exhaustive analysis of the results obtained, we show how the insulin-glucose model is obtained for each patient.
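To make the KDD stages above concrete, here is a minimal sketch of a per-patient glucose predictor built along those lines. The project itself drives WEKA from Java behind a graphical interface; this sketch uses pandas and scikit-learn instead, and the file name, the column names and the six-sample prediction horizon are illustrative assumptions rather than details taken from the project.

```python
# Hedged sketch of the KDD stages (selection, pre-processing, transformation,
# mining, evaluation). Column names and horizon are assumptions, not project details.
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("patient01.csv")              # hypothetical file: glucose, insulin, carbs
df = df.interpolate().dropna()                 # pre-processing: fill sensor gaps

for lag in (1, 2, 3):                          # transformation: lagged input features
    for col in ("glucose", "insulin", "carbs"):
        df[f"{col}_lag{lag}"] = df[col].shift(lag)
df["glucose_future"] = df["glucose"].shift(-6) # target: glucose 6 samples ahead
df = df.dropna()

X = df[[c for c in df.columns if "lag" in c]]
y = df["glucose_future"]
split = int(0.8 * len(df))                     # chronological train/test split
model = Ridge().fit(X.iloc[:split], y.iloc[:split])   # mining: fit per-patient predictor
print("MAE:", mean_absolute_error(y.iloc[split:], model.predict(X.iloc[split:])))
```

A ridge regression stands in here for whichever learner is chosen during the mining phase; the point is the pipeline shape, not the specific model.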

Relevance:

10.00%

Publisher:

Abstract:

Acquired brain injury (ABI) is a serious social and health problem of increasing magnitude and of great diagnostic and therapeutic complexity. Its high incidence, together with the increased survival of patients once the acute phase has been overcome, also makes it a highly prevalent problem. According to the World Health Organization (WHO), ABI will be among the ten most common causes of disability by 2020. Neurorehabilitation makes it possible to improve both cognitive and functional deficits and to increase the autonomy of people with ABI. The incorporation of new technological solutions into the neurorehabilitation process aims at a new paradigm in which treatments can be designed to be intensive, personalized, monitored and evidence-based, since it is these four characteristics that ensure that treatments are effective. Unlike most medical disciplines, there are no associations of symptoms and signs of cognitive impairment that can guide therapy. Currently, neurorehabilitation treatments are designed on the basis of the results of a neuropsychological assessment battery that evaluates the degree of impairment of each cognitive function (memory, attention, executive functions, etc.). The research line in which this work is framed aims to design and develop a cognitive profile based not only on the results of that battery of tests, but also on theoretical information covering both anatomical structures and functional relationships, and on anatomical information obtained from imaging studies such as magnetic resonance. In this way, the cognitive profile used to design the treatments integrates personalized, evidence-based information. Neuroimaging techniques are a fundamental tool for identifying lesions when generating these cognitive profiles. The classical approach to lesion identification consists of manually delineating brain anatomical regions. This approach presents several problems related to inconsistent criteria between clinicians, reproducibility and time, so automating the procedure is essential to ensure an objective extraction of information. Automatic delineation of anatomical regions is performed by registration, either against an atlas or against imaging studies of other subjects. However, the pathological changes associated with ABI always involve intensity abnormalities and/or changes in the location of structures. As a consequence, traditional intensity-based registration algorithms do not work correctly and require the clinician to select certain points (referred to in this thesis as singular points). Moreover, these algorithms do not allow large, delocalized deformations, which may also appear in the presence of lesions caused by a stroke or a traumatic brain injury (TBI). This thesis focuses on the design, development and implementation of a methodology for the automatic detection of damaged structures that integrates algorithms whose main objective is to generate reproducible and objective results.
The methodology is divided into four stages: pre-processing, identification of singular points, registration and lesion detection. The work and results achieved in this thesis are the following. Pre-processing: the aim of this first stage is to homogenize all the input data so that valid conclusions can be drawn from the results; it therefore has a large impact on the final results and comprises three operations: skull stripping, intensity normalization and spatial normalization. Identification of singular points: the objective of this stage is to automate the identification of anatomical points (singular points), which is otherwise done manually by the clinician. This automation makes it possible to identify a larger number of points, and therefore to obtain more information; to remove the factor associated with inter-subject variability, so that the results are reproducible and objective; and to eliminate the time spent on manual marking. This research proposes a singular-point identification algorithm (descriptor) based on a multi-detector solution containing multi-parametric information, both spatial and intensity-based, and the algorithm has been compared with similar algorithms found in the state of the art. Registration: this stage aims to bring two imaging studies of different subjects/patients into spatial correspondence. The algorithm proposed in this work is descriptor-based, and its main objective is the computation of a vector field that can introduce delocalized deformations (in different regions of the image) as large as the associated deformation vector indicates. The proposed algorithm has been compared with other registration algorithms used in neuroimaging applications with control-subject studies; the results obtained are promising and represent a new context for the automatic identification of structures. Lesion identification: in this last stage, the structures whose spatial location and area or volume have been modified with respect to a state of normality are identified. To this end, a statistical study of the atlas to be used is performed and the statistical normality parameters associated with location and area are established. Depending on the structures delineated in the atlas, more or fewer anatomical structures can be identified, the methodology being independent of the selected atlas. Overall, this doctoral thesis corroborates the research hypotheses put forward regarding the automatic identification of lesions using structural medical imaging studies, specifically magnetic resonance studies. On these foundations, new research fields can be opened that contribute to improving lesion detection.
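As an illustration of the pre-processing stage only, the sketch below shows one simple form of intensity normalization, a z-score over the voxels inside a brain mask. It is not the method implemented in the thesis, and skull stripping and spatial normalization are assumed to have been carried out beforehand with standard neuroimaging tools.

```python
# Minimal sketch of an intensity-normalization step (not the thesis code).
# The volume and mask are assumed to come from an already skull-stripped,
# spatially normalized MRI study.
import numpy as np

def zscore_normalize(volume, brain_mask):
    """Map voxel intensities inside the brain mask to zero mean / unit variance,
    so that studies acquired on different scanners become comparable."""
    voxels = volume[brain_mask > 0]
    mu, sigma = voxels.mean(), voxels.std()
    normalized = np.zeros_like(volume, dtype=float)
    normalized[brain_mask > 0] = (voxels - mu) / sigma
    return normalized
```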

Relevance:

10.00%

Publisher:

Abstract:

In a scenario of change in the systemic cycle of accumulation, urban centrality takes on the role of a space for the accumulation of economic and symbolic capital, built around the discourse of excellence and prestige derived from globalization. The production of a differentiated and competitive space tends to normalize ways of life, contributing to the coercive aspiration of neoliberalism. After two decades of maximum expansion of the accumulation system in Madrid, the investments made in the Almendra Central have transformed the space of centrality, facilitating the production of an exclusionary space in search of advantages for attracting capital. A competition between spaces thus arises to extend the accumulation system, anchoring everyday life in consumption, which becomes the reference instrument of social position. Analysis from three approaches (political economy, urban form and ideology) suggests that the Almendra Central of Madrid, as a physical and social construction, has become the representation not only of the world of the commodity but also of the status of the individual under globalization.

Relevance:

10.00%

Publisher:

Abstract:

The development of gene-replacement therapy for inborn errors of metabolism has been hindered by the limited number of suitable large-animal models of these diseases and by inadequate methods of assessing the efficacy of treatment. Such methods should provide sensitive detection of expression in vivo and should be unaffected by concurrent pharmacologic and dietary regimens. We present the results of studies in a neonatal bovine model of citrullinemia, an inborn error of urea-cycle metabolism characterized by deficiency of argininosuccinate synthetase and consequent life-threatening hyperammonemia. Measurements of the flux of nitrogen from orally administered 15NH4 to [15N]urea were used to determine urea-cycle activity in vivo. In control animals, these isotopic measurements proved to be unaffected by pharmacologic treatments. Systemic administration of a first-generation E1-deleted adenoviral vector expressing human argininosuccinate synthetase resulted in transduction of hepatocytes and partial correction of the enzyme defect. The isotopic method showed significant restoration of urea synthesis. Moreover, the calves showed clinical improvement and normalization of plasma glutamine levels after treatment. The results show the clinical efficacy of treating a large-animal model of an inborn error of hepatocyte metabolism in conjunction with a method for sensitively measuring correction in vivo. These studies will be applicable to human trials of the treatment of this disorder and other related urea-cycle disorders.

Relevance:

10.00%

Publisher:

Abstract:

There is considerable evidence from animal studies that gonadal steroid hormones modulate neuronal activity and affect behavior. To study this in humans directly, we used H₂¹⁵O positron-emission tomography to measure regional cerebral blood flow (rCBF) in young women during three pharmacologically controlled hormonal conditions spanning 4–5 months: ovarian suppression induced by the gonadotropin-releasing hormone agonist leuprolide acetate (Lupron), Lupron plus estradiol replacement, and Lupron plus progesterone replacement. Estradiol and progesterone were administered in a double-blind cross-over design. On each occasion positron-emission tomography scans were performed during (i) the Wisconsin Card Sorting Test, a neuropsychological test that physiologically activates prefrontal cortex (PFC) and an associated cortical network including inferior parietal lobule and posterior inferolateral temporal gyrus, and (ii) a no-delay matching-to-sample sensorimotor control task. During treatment with Lupron alone (i.e., with virtual absence of gonadal steroid hormones), there was marked attenuation of the typical Wisconsin Card Sorting Test activation pattern even though task performance did not change. Most strikingly, there was no rCBF increase in PFC. When either progesterone or estrogen was added to the Lupron regimen, there was normalization of the rCBF activation pattern with augmentation of the parietal and temporal foci and return of the dorsolateral PFC activation. These data directly demonstrate that the hormonal milieu modulates cognition-related neural activity in humans.

Relevance:

10.00%

Publisher:

Abstract:

Recent experimental evidence has shown that application of certain neurotrophic factors (NTs) to the developing primary visual cortex prevents the development of ocular dominance (OD) columns. One interpretation of this result is that afferents from the lateral geniculate nucleus compete for postsynaptic trophic factor in an activity-dependent manner. Application of excess trophic factor eliminates this competition, thereby preventing OD column formation. We present a model of OD column development, incorporating Hebbian synaptic modification and activity-driven competition for NT, which accounts for both normal OD column development as well as the prevention of that development when competition is removed. In the “control” situation, when available NT is below a critical amount, OD columns form normally. These columns form without weight normalization procedures and in the presence of positive inter-eye correlations. In the “experimental” case, OD column development is prevented in a local neighborhood in which excess NT has been added. Our model proposes a biologically plausible mechanism for competition between neural populations that is motivated by several pieces of experimental data, thereby accounting for both normal and experimentally perturbed conditions.
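The toy sketch below illustrates the mechanism described in the abstract, Hebbian growth gated by an activity-biased share of a limited neurotrophin pool, for a single cortical site with two afferents. It is a hedged illustration rather than the authors' model: the update rule, the saturating gate and every constant are assumptions made for the example.

```python
# Toy single-step illustration: Hebbian weight growth gated by an
# activity-biased share of a limited neurotrophin (NT) pool.
# NOT the authors' model or parameter values.
import numpy as np

def nt_gated_hebbian_step(w, a, post, total_nt, lr=0.01, k=0.5):
    """w: afferent weights (e.g. [left eye, right eye]); a: their activities;
    post: postsynaptic activity; total_nt: NT pool available at this site."""
    demand = w * a                              # activity-dependent NT demand
    share = demand / (demand.sum() + 1e-12)     # zero-sum split of the pool
    nt = total_nt * share
    gate = nt / (nt + k)                        # saturating NT gate in [0, 1)
    # scarce pool -> gates are small and one afferent's share comes at the
    #                other's expense (competition)
    # excess pool -> both gates ~1, plain Hebbian growth for both eyes
    return np.clip(w + lr * a * post * gate, 0.0, 1.0)

w = np.array([0.55, 0.45])
a = np.array([1.0, 0.8])
print(nt_gated_hebbian_step(w, a, post=w @ a, total_nt=0.2))   # limited NT
print(nt_gated_hebbian_step(w, a, post=w @ a, total_nt=20.0))  # excess NT
```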

Relevance:

10.00%

Publisher:

Abstract:

The human 15-lipoxygenase (15-LO) gene was transfected into rat kidneys in vivo via intra-renal arterial injection. Three days later, acute (passive) or accelerated forms of antiglomerular basement membrane antibody-mediated glomerulonephritis were induced in transfected and nontransfected or sham-transfected controls. Studies of glomerular functions (filtration and protein excretion) and ex vivo glomerular leukotriene B4 biosynthesis at 3 hr, and up to 4 days, after induction of nephritis revealed preservation or normalization of these parameters in transfected kidneys that expressed human 15-LO mRNA and mature protein, but not in contralateral control kidneys or sham-transfected animals. The results provide in vivo-derived data supporting a direct anti-inflammatory role for 15-LO during immune-mediated tissue injury.

Relevance:

10.00%

Publisher:

Abstract:

The ob/ob mouse is genetically deficient in leptin and exhibits both an obese and a mild non-insulin-dependent diabetic phenotype. To test the hypothesis that correction of the obese phenotype by leptin gene therapy will lead to the spontaneous correction of the diabetic phenotype, the ob/ob mouse was treated with a recombinant adenovirus expressing the mouse leptin cDNA. Treatment resulted in dramatic reductions in both food intake and body weight, as well as the normalization of serum insulin levels and glucose tolerance. The subsequent diminishment in serum leptin levels resulted in the rapid resumption of food intake and a gradual gain of body weight, which correlated with the gradual return of hyperinsulinemia and insulin resistance. These results not only demonstrated that the obese and diabetic phenotypes in the adult ob/ob mice are corrected by leptin gene treatment but also provide confirming evidence that body weight control may be critical in the long-term management of non-insulin-dependent diabetes mellitus in obese patients.

Relevance:

10.00%

Publisher:

Abstract:

We describe the use of singular value decomposition in transforming genome-wide expression data from genes × arrays space to reduced diagonalized “eigengenes” × “eigenarrays” space, where the eigengenes (or eigenarrays) are unique orthonormal superpositions of the genes (or arrays). Normalizing the data by filtering out the eigengenes (and eigenarrays) that are inferred to represent noise or experimental artifacts enables meaningful comparison of the expression of different genes across different arrays in different experiments. Sorting the data according to the eigengenes and eigenarrays gives a global picture of the dynamics of gene expression, in which individual genes and arrays appear to be classified into groups of similar regulation and function, or similar cellular state and biological phenotype, respectively. After normalization and sorting, the significant eigengenes and eigenarrays can be associated with observed genome-wide effects of regulators, or with measured samples, in which these regulators are overactive or underactive, respectively.
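A minimal sketch of this normalization step, assuming numpy and a genes × arrays matrix: compute the SVD, zero out the eigengenes inferred to represent noise or experimental artifacts, and reconstruct the data. Which modes to drop is a judgment made per data set; dropping only the first mode here is purely illustrative.

```python
# Sketch of SVD-based normalization of a genes x arrays expression matrix:
# filter out eigengenes judged to reflect noise or artifacts, then reconstruct.
import numpy as np

def svd_filter(expression, drop_modes=(0,)):
    """expression: (n_genes, n_arrays) matrix; drop_modes: indices of the
    eigengenes to remove (e.g. a steady-state or batch-effect mode)."""
    u, s, vt = np.linalg.svd(expression, full_matrices=False)
    s_filtered = s.copy()
    for m in drop_modes:
        s_filtered[m] = 0.0                 # zero out the unwanted eigengene
    return u @ np.diag(s_filtered) @ vt     # normalized data set

rng = np.random.default_rng(1)
data = rng.normal(size=(500, 12)) + 5.0     # fake genes x arrays with a constant offset mode
normalized = svd_filter(data)
```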

Relevance:

10.00%

Publisher:

Abstract:

The ob/ob mouse is genetically deficient in leptin and exhibits a phenotype that includes obesity and non-insulin-dependent diabetes mellitus. This phenotype closely resembles the morbid obesity seen in humans. In this study, we demonstrate that a single intramuscular injection of a recombinant adeno-associated virus (AAV) vector encoding mouse leptin (rAAV-leptin) in ob/ob mice leads to prevention of obesity and diabetes. The treated animals show normalization of metabolic abnormalities including hyperglycemia, insulin resistance, impaired glucose tolerance, and lethargy. The effects of a single injection have lasted through the 6-month course of the study. At all time points measured, the circulating levels of leptin in the serum were similar to those of age-matched control C57 mice. These results demonstrate that maintenance of normal levels of leptin (2–5 ng/ml) in the circulation can prevent both the onset of obesity and associated non-insulin-dependent diabetes. Thus a single injection of a rAAV vector expressing a therapeutic gene can lead to complete and long-term correction of a genetic disorder. Our study demonstrates the long-term correction of a disease caused by a genetic defect and proves the feasibility of using rAAV-based vectors for the treatment of chronic disorders like obesity.

Relevance:

10.00%

Publisher:

Abstract:

We set out to define patterns of gene expression during kidney organogenesis by using high-density DNA array technology. Expression analysis of 8,740 rat genes revealed five discrete patterns or groups of gene expression during nephrogenesis. Group 1 consisted of genes with very high expression in the early embryonic kidney, many with roles in protein translation and DNA replication. Group 2 consisted of genes that peaked in midembryogenesis and contained many transcripts specifying proteins of the extracellular matrix. Many additional transcripts allied with groups 1 and 2 had known or proposed roles in kidney development and included LIM1, POD1, GFRA1, WT1, BCL2, Homeobox protein A11, timeless, pleiotrophin, HGF, HNF3, BMP4, TGF-α, TGF-β2, IGF-II, met, FGF7, BMP4, and ganglioside-GD3. Group 3 consisted of transcripts that peaked in the neonatal period and contained a number of retrotransposon RNAs. Group 4 contained genes that steadily increased in relative expression levels throughout development, including many genes involved in energy metabolism and transport. Group 5 consisted of genes with relatively low levels of expression throughout embryogenesis but with markedly higher levels in the adult kidney; this group included a heterogeneous mix of transporters, detoxification enzymes, and oxidative stress genes. The data suggest that the embryonic kidney is committed to cellular proliferation and morphogenesis early on, followed sequentially by extracellular matrix deposition and acquisition of markers of terminal differentiation. The neonatal burst of retrotransposon mRNA was unexpected and may play a role in a stress response associated with birth. Custom analytical tools were developed including “The Equalizer” and “eBlot,” which contain improved methods for data normalization, significance testing, and data mining.
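The abstract does not detail how the five expression groups were derived, so the sketch below is only a generic illustration of this kind of analysis and not the authors' "Equalizer" or "eBlot" tools: each gene's temporal profile is log-transformed, standardized, and clustered with k-means. The five time points, the synthetic data and k = 5 are assumptions for the example.

```python
# Generic illustration of grouping genes by developmental expression profile
# (NOT the authors' "Equalizer"/"eBlot" methods). Data and time points are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
expression = rng.lognormal(mean=2.0, sigma=1.0, size=(8740, 5))   # genes x time points

log_expr = np.log2(expression + 1.0)                              # variance-stabilizing transform
profiles = (log_expr - log_expr.mean(axis=1, keepdims=True)) / (
    log_expr.std(axis=1, keepdims=True) + 1e-9)                   # per-gene profile normalization

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(profiles)
for g in range(5):
    print(f"group {g}: {np.sum(labels == g)} genes")
```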

Relevance:

10.00%

Publisher:

Abstract:

Hyperglycemia is a common feature of diabetes mellitus. It results from a decrease in glucose utilization by the liver and peripheral tissues and an increase in hepatic glucose production. Glucose phosphorylation by glucokinase is an initial event in glucose metabolism by the liver. However, glucokinase gene expression is very low in diabetic animals. Transgenic mice expressing the P-enolpyruvate carboxykinase/glucokinase chimeric gene were generated to study whether the return of the expression of glucokinase in the liver of diabetic mice might prevent metabolic alterations. In contrast to nontransgenic mice treated with streptozotocin, mice with the transgene previously treated with streptozotocin showed high levels of both glucokinase mRNA and its enzyme activity in the liver, which were associated with an increase in intracellular levels of glucose 6-phosphate and glycogen. The liver of these mice also showed an increase in pyruvate kinase activity and lactate production. Furthermore, normalization of both the expression of genes involved in gluconeogenesis and ketogenesis in the liver and the production of glucose and ketone body by hepatocytes in primary culture were observed in streptozotocin-treated transgenic mice. Thus, glycolysis was induced while gluconeogenesis and ketogenesis were blocked in the liver of diabetic mice expressing glucokinase. This was associated with normalization of blood glucose, ketone bodies, triglycerides, and free fatty acids even in the absence of insulin. These results suggest that the expression of glucokinase during diabetes might be a new approach to the normalization of hyperglycemia.

Relevance:

10.00%

Publisher:

Abstract:

A new and highly effective method, termed suppression subtractive hybridization (SSH), has been developed for the generation of subtracted cDNA libraries. It is based primarily on a recently described technique called suppression PCR and combines normalization and subtraction in a single procedure. The normalization step equalizes the abundance of cDNAs within the target population and the subtraction step excludes the common sequences between the target and driver populations. In a model system, the SSH technique enriched for rare sequences over 1,000-fold in one round of subtractive hybridization. We demonstrate its usefulness by generating a testis-specific cDNA library and by using the subtracted cDNA mixture as a hybridization probe to identify homologous sequences in a human Y chromosome cosmid library. The human DNA inserts in the isolated cosmids were further confirmed to be expressed in a testis-specific manner. These results suggest that the SSH technique is applicable to many molecular genetic and positional cloning studies for the identification of disease, developmental, tissue-specific, or other differentially expressed genes.

Relevance:

10.00%

Publisher:

Abstract:

Recent studies have demonstrated that the overexpression of the c-myc gene in the liver of transgenic mice leads to an increase in both utilization and accumulation of glucose in the liver, suggesting that c-Myc transcription factor is involved in the control of liver carbohydrate metabolism in vivo. To determine whether the increase in c-Myc might control glucose homeostasis, an intraperitoneal glucose tolerance test was performed. Transgenic mice showed lower levels of blood glucose than control animals, indicating that the overexpression of c-Myc led to an increase of blood glucose disposal by the liver. Thus, the increase in c-Myc might counteract diabetic hyperglycemia. In contrast to control mice, transgenic mice treated with streptozotocin showed normalization of concentrations of blood glucose, ketone bodies, triacylglycerols and free fatty acids in the absence of insulin. These findings resulted from the normalization of liver metabolism in these animals. While low glucokinase activity was detected in the liver of diabetic control mice, high levels of both glucokinase mRNA and enzyme activity were noted in the liver of streptozotocin-treated transgenic mice, which led to an increase in intracellular levels of glucose 6-phosphate and glycogen. The liver of these mice also showed an increase in pyruvate kinase activity and lactate production. Furthermore, normalization of both the expression of genes involved in the control of gluconeogenesis and ketogenesis and the production of glucose and ketone bodies was observed in streptozotocin-treated transgenic mice. Thus, these results suggested that c-Myc counteracted diabetic alterations through its ability to induce hepatic glucose uptake and utilization and to block the activation of gluconeogenesis and ketogenesis.