Resumo:
El agotamiento, la ausencia o, simplemente, la incertidumbre sobre la cantidad de las reservas de combustibles fósiles se añaden a la variabilidad de los precios y a la creciente inestabilidad en la cadena de aprovisionamiento para crear fuertes incentivos para el desarrollo de fuentes y vectores energéticos alternativos. El atractivo del hidrógeno como vector energético es muy alto en un contexto que abarca, además, fuertes inquietudes por parte de la población sobre la contaminación y las emisiones de gases de efecto invernadero. Debido a su excelente impacto ambiental, la aceptación pública del nuevo vector energético dependería, a priori, del control de los riesgos asociados a su manipulación y almacenamiento. Entre estos, la existencia de un innegable riesgo de explosión aparece como el principal inconveniente de este combustible alternativo. Esta tesis investiga la modelización numérica de explosiones en grandes volúmenes, centrándose en la simulación de la combustión turbulenta en grandes dominios de cálculo en los que la resolución alcanzable está fuertemente limitada. En la introducción, se aborda una descripción general de los procesos de explosión. Se concluye que las restricciones en la resolución de los cálculos hacen necesario el modelado de los procesos de turbulencia y de combustión. Posteriormente, se realiza una revisión crítica de las metodologías disponibles tanto para turbulencia como para combustión, señalando las fortalezas, deficiencias e idoneidad de cada una de ellas. Como conclusión de esta investigación, se obtiene que la única estrategia viable para el modelado de la combustión, teniendo en cuenta las limitaciones existentes, es la utilización de una expresión que describa la velocidad de combustión turbulenta en función de distintos parámetros. 
Este tipo de modelos se denominan modelos de velocidad de llama turbulenta y permiten cerrar una ecuación de balance para la variable de progreso de la combustión. También se ha concluido que la solución más adecuada para la simulación de la turbulencia es la utilización de diferentes metodologías, LES o RANS, en función de la geometría y de las restricciones en la resolución de cada problema particular. Sobre la base de estos hallazgos, el candidato aborda la creación de un modelo de combustión en el marco de los modelos de velocidad de llama turbulenta. La metodología propuesta es capaz de superar las deficiencias existentes en los modelos disponibles para aquellos problemas en los que se precisa realizar cálculos con una resolución moderada o baja. En particular, el modelo utiliza un algoritmo heurístico para impedir el crecimiento del espesor de la llama, una deficiencia que lastraba el célebre modelo de Zimont. Bajo este enfoque, el énfasis del análisis se centra en la determinación de la velocidad de combustión, tanto laminar como turbulenta. La velocidad de combustión laminar se determina a través de una nueva formulación capaz de tener en cuenta la influencia simultánea de la relación de equivalencia, la temperatura, la presión y la dilución con vapor de agua. La formulación obtenida es válida para un dominio de temperaturas, presiones y dilución con vapor de agua más extenso que el de cualquiera de las formulaciones previamente disponibles. Por otra parte, el cálculo de la velocidad de combustión turbulenta puede abordarse mediante el uso de correlaciones que permiten la determinación de esta magnitud en función de distintos parámetros. Con el objetivo de seleccionar la formulación más adecuada, se ha realizado una comparación entre los resultados obtenidos con diversas expresiones y los resultados experimentales. 
Se concluye que la ecuación debida a Schmidt es la más adecuada teniendo en cuenta las condiciones del estudio. A continuación, se analiza la importancia de las inestabilidades de la llama en la propagación de los frentes de combustión. Su relevancia resulta significativa para mezclas pobres en combustible en las que la intensidad de la turbulencia permanece moderada. Estas condiciones son importantes dado que son habituales en los accidentes que ocurren en las centrales nucleares. Por ello, se lleva a cabo la creación de un modelo que permita estimar el efecto de las inestabilidades, y en concreto de la inestabilidad acústica-paramétrica, en la velocidad de propagación de la llama. El modelado incluye la derivación matemática de la formulación heurística de Bauwens et al. para el cálculo del incremento de la velocidad de combustión debido a las inestabilidades de la llama, así como el análisis de la estabilidad de las llamas con respecto a una perturbación cíclica. Por último, los resultados se combinan para concluir el modelado de la inestabilidad acústica-paramétrica. Tras finalizar esta fase, la investigación se centró en la aplicación del modelo desarrollado a varios problemas de importancia para la seguridad industrial, el posterior análisis de los resultados y la comparación de los mismos con los datos experimentales correspondientes. Concretamente, se abordó la simulación de explosiones en túneles y en contenedores, con y sin gradiente de concentración y ventilación. Como resultado general, se logra validar el modelo, confirmando su idoneidad para estos problemas. Como última tarea, se ha realizado un análisis en profundidad de la catástrofe de Fukushima-Daiichi. El objetivo del análisis es determinar la cantidad de hidrógeno que explotó en el reactor número uno, en contraste con otros estudios sobre el tema, que se han centrado en la determinación de la cantidad de hidrógeno generado durante el accidente. 
Como resultado de la investigación, se determinó que la cantidad más probable de hidrógeno consumida durante la explosión fue de 130 kg. Es notable que la combustión de una cantidad relativamente pequeña de hidrógeno pueda causar un daño tan significativo, lo que muestra la importancia de este tipo de investigaciones. Las ramas de la industria para las que el modelo desarrollado será de interés abarcan la totalidad de la futura economía del hidrógeno (pilas de combustible, vehículos, almacenamiento energético, etc.), con un impacto especial en los sectores del transporte y la energía nuclear, tanto de fisión como de fusión. ABSTRACT The exhaustion, absolute absence or simply the uncertainty about the amount of the reserves of fossil fuels, added to the variability of their prices and the increasing instability and difficulties in the supply chain, are strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen is very high in a context that additionally comprehends concerns about pollution and emissions. Due to its excellent environmental impact, the public acceptance of the new energy vector will depend on the risks associated with its handling and storage. Among these, the danger of a severe explosion appears as the major drawback of this alternative fuel. This thesis investigates the numerical modeling of large-scale explosions, focusing on the simulation of turbulent combustion in large domains where the achievable resolution is forcefully limited. In the introduction, a general description of the explosion process is undertaken. It is concluded that the restrictions on resolution make it necessary to model the turbulence and combustion processes. Subsequently, a critical review of the available methodologies for both turbulence and combustion is carried out, pointing out their strengths and deficiencies. 
As a conclusion of this investigation, it appears clear that the only viable methodology for combustion modeling is the utilization of an expression for the turbulent burning velocity to close a balance equation for the combustion progress variable, a model of the turbulent flame velocity kind. It is also concluded that, depending on the particular resolution restrictions of each problem and on its geometry, the utilization of different simulation methodologies, LES or RANS, is the most adequate solution for modeling the turbulence. Based on these findings, the candidate undertakes the creation of a combustion model in the framework of the turbulent flame speed methodology which is able to overcome the deficiencies of the available ones for low-resolution problems. Particularly, the model utilizes a heuristic algorithm to maintain the thickness of the flame brush under control, a serious deficiency of the Zimont model. Under the approach utilized by the candidate, the emphasis of the analysis lies in the accurate determination of the burning velocity, both laminar and turbulent. On the one hand, the laminar burning velocity is determined through a newly developed correlation which is able to describe the simultaneous influence of the equivalence ratio, temperature, steam dilution and pressure on the laminar burning velocity. The formulation obtained is valid for a larger domain of temperature, steam dilution and pressure than any of the previously available formulations. On the other hand, a certain number of turbulent burning velocity correlations are available in the literature. To select the most suitable one, they have been compared with experiments and ranked, with the outcome that the formulation due to Schmidt was the most adequate for the conditions studied. Subsequently, the role of the flame instabilities in the development of explosions is assessed. Their significance appears to be of importance for lean mixtures in which the turbulence intensity remains moderate. 
These conditions are important because they are typical of accidents in nuclear power plants. Therefore, the creation of a model to account for the instabilities, and concretely the acoustic-parametric instability, is undertaken. This encloses the mathematical derivation of the heuristic formulation of Bauwens et al. for the calculation of the burning velocity enhancement due to flame instabilities, as well as the analysis of the stability of flames with respect to a cyclic velocity perturbation. The results are combined to build a model of the acoustic-parametric instability. The following task in this research has been to apply the model developed to several problems significant for industrial safety; the subsequent analysis of the results and comparison with the corresponding experimental data was then performed. As part of this task, simulations of explosions in a tunnel and in large containers, with and without concentration gradients and venting, have been carried out. As a general outcome, the validation of the model is achieved, confirming its suitability for the problems addressed. As a last and final undertaking, a thorough study of the Fukushima-Daiichi catastrophe has been carried out. The analysis performed aims at determining the amount of hydrogen participating in the explosion that happened in reactor one, in contrast with other analyses centered on the amount of hydrogen generated during the accident. As an outcome of the research, it was determined that the most probable amount of hydrogen exploding during the catastrophe was 130 kg. It is remarkable that the combustion of such a small quantity of material can cause tremendous damage. This is an indication of the importance of these types of investigations. The industrial branches that can benefit from the applications of the model developed in this thesis include the whole future hydrogen economy, as well as nuclear safety in both fusion and fission technology.
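The turbulent-flame-speed closure this abstract describes can be sketched in generic form. The equations below show the standard Zimont-type formulation (a Favre-averaged progress-variable transport equation closed with a turbulent burning velocity) and the usual functional shape of a laminar burning velocity correlation; they are illustrative of the model family, not necessarily the exact expressions derived in the thesis, and the exponents and dilution factor are assumed fitted constants:

```latex
% Favre-averaged transport equation for the combustion progress variable c,
% closed with a turbulent burning velocity S_T (Zimont-type TFC closure):
\frac{\partial \bar{\rho}\,\tilde{c}}{\partial t}
  + \nabla \cdot \left( \bar{\rho}\, \tilde{\mathbf{u}}\, \tilde{c} \right)
  = \nabla \cdot \left( \frac{\mu_t}{\mathrm{Sc}_t}\, \nabla \tilde{c} \right)
  + \rho_u\, S_T\, \lvert \nabla \tilde{c} \rvert

% Generic form of a laminar burning velocity correlation accounting for the
% equivalence ratio \phi, temperature, pressure and steam dilution X_{H_2O};
% \alpha, \beta and \gamma denote fitted constants (illustrative assumption):
S_L\!\left(\phi, T, p, X_{\mathrm{H_2O}}\right)
  = S_{L,0}(\phi)
    \left( \frac{T}{T_0} \right)^{\alpha}
    \left( \frac{p}{p_0} \right)^{\beta}
    \left( 1 - \gamma\, X_{\mathrm{H_2O}} \right)
```

The source term $\rho_u S_T \lvert \nabla \tilde{c} \rvert$ is what makes the flame-brush thickness sensitive to the modeling of $S_T$, which is why the abstract emphasizes the choice of turbulent burning velocity correlation.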
Resumo:
Durante las últimas décadas se ha producido un fenómeno global de envejecimiento en la población. Esta tendencia se puede observar prácticamente en todos los países del mundo y se debe principalmente a los avances en la medicina y a los descensos en las tasas de fertilidad y mortalidad. El envejecimiento de la población tiene un gran impacto en la salud de los ciudadanos, y a menudo es la causa de aparición de enfermedades crónicas. Este tipo de enfermedades supone una amenaza y una carga importantes para la sociedad, especialmente en aspectos como la mortalidad o los gastos en los sistemas sanitarios. Entre las enfermedades cardiovasculares, la insuficiencia cardíaca es probablemente la condición con mayor prevalencia y afecta a 23-26 millones de personas en todo el mundo. Normalmente, la insuficiencia cardíaca presenta un mal pronóstico y una tasa de supervivencia baja, en algunos casos peor que la de algunos tipos de cáncer. Además, suele ser la causa de hospitalizaciones frecuentes y es una de las enfermedades más costosas para los sistemas sanitarios. La tendencia al envejecimiento de la población y la creciente incidencia de las enfermedades crónicas están llevando a una situación en la que los sistemas de salud no son capaces de hacer frente a la demanda de la sociedad. Los servicios de salud existentes tendrán que adaptarse para ser efectivos y sostenibles en el futuro. Es necesario identificar nuevos paradigmas de cuidado de pacientes, así como mecanismos para la provisión de servicios que ayuden a transformar estos sistemas sanitarios. En este contexto, esta tesis se plantea la búsqueda de soluciones, basadas en las Tecnologías de la Información y la Comunicación (TIC), que contribuyan a realizar la transformación en los sistemas sanitarios. En concreto, la tesis se centra en abordar los problemas de una de las enfermedades con mayor impacto en estos sistemas: la insuficiencia cardíaca. 
Las siguientes hipótesis constituyen la base para la realización de este trabajo de investigación: 1. Es posible definir un modelo basado en el paradigma de lazo cerrado y herramientas TIC que formalice el diseño de mejores servicios para pacientes con insuficiencia cardíaca. 2. El modelo de lazo cerrado definido se puede utilizar para definir un servicio real que ayude a gestionar la insuficiencia cardíaca crónica. 3. La introducción, la adopción y el uso de un servicio basado en el modelo definido se traducirán en mejoras en el estado de salud de los pacientes que sufren insuficiencia cardíaca. 4. La utilización de un sistema basado en el modelo de lazo cerrado definido mejorará la experiencia de usuario de los pacientes. La definición del modelo planteado se ha basado en el estándar ISO/EN 13940 - Sistema de conceptos para dar soporte a la continuidad de la asistencia. Comprende un conjunto de conceptos, procesos, flujos de trabajo y servicios como componentes principales, y representa una formalización de los servicios para los pacientes con insuficiencia cardíaca. Para evaluar el modelo planteado, se ha diseñado un servicio real basado en el mismo, además de implementarse un sistema de apoyo a dicho servicio. El diseño y la implementación de dicho sistema se realizaron siguiendo la metodología de Diseño Orientado a Objetivos. El objetivo de la evaluación consistía en investigar el efecto que tiene un servicio basado en el modelo de lazo cerrado sobre el estado de salud de los pacientes con insuficiencia cardíaca. La evaluación se realizó en el marco de un estudio clínico observacional. El análisis de los resultados ha comprendido métodos de análisis cuantitativos y cualitativos. El análisis cuantitativo se ha centrado en determinar el estado de salud de los pacientes en base a datos objetivos (obtenidos en pruebas de laboratorio o exámenes médicos). 
Para realizar este análisis se definieron dos índices específicos: el índice de estabilidad y el índice de evolución del estado de salud. El análisis cualitativo ha evaluado la autopercepción del estado de salud de los pacientes en términos de calidad de vida, autocuidado, ansiedad y depresión, así como niveles de conocimiento. Se ha basado en los datos recogidos mediante varios cuestionarios o instrumentos estándar (i.e. EQ-5D, la Escala Hospitalaria de Ansiedad y Depresión (HADS), el Cuestionario de Cardiomiopatía de Kansas City (KCCQ), la Escala Holandesa de Conocimiento de Insuficiencia Cardíaca (DHFKS) y la Escala Europea de Autocuidado en Insuficiencia Cardíaca (EHFScBS)), así como cuestionarios dedicados no estandarizados de experiencia de usuario. Los resultados obtenidos en ambos análisis, cuantitativo y cualitativo, se compararon con el fin de evaluar la correlación entre el estado de salud objetivo y subjetivo de los pacientes. Los resultados de la validación demostraron que el modelo propuesto tiene efectos positivos en el cuidado de los pacientes con insuficiencia cardíaca y contribuye a mejorar su estado de salud. Asimismo, ratificaron el modelo como instrumento válido para la definición de servicios mejorados para la gestión de esta enfermedad. ABSTRACT During the last decades we have witnessed a global aging phenomenon in the population. This can be observed in practically every country in the world, and it is mainly caused by the advances in medicine and the decrease of mortality and fertility rates. Population aging has an important impact on citizens’ health and is often the cause of chronic diseases, which constitute a global burden and threat to society in terms of mortality and healthcare expenditure. Among chronic diseases, Chronic Heart Failure (CHF) or Heart Failure (HF) is probably the one with the highest prevalence, affecting between 23 and 26 million people worldwide. 
Heart failure is a chronic, long-term and serious condition with a very poor prognosis and worse survival rates than some types of cancer. Additionally, it is often the cause of frequent hospitalizations and one of the most expensive conditions for healthcare systems. The aging trends in the population and the increasing incidence of chronic diseases are leading to a situation where healthcare systems are not able to cope with society’s demand. Current healthcare services will have to be adapted and redefined in order to be effective and sustainable in the future. There is a need to find new paradigms for patients’ care, and to identify new mechanisms for the provision of services that help to transform the healthcare systems. In this context, this thesis aims to explore new solutions, based on ICT, that contribute to achieving the needed transformation within the healthcare systems. In particular, it focuses on addressing the problems of one of the diseases with the highest impact within these systems: heart failure. The following hypotheses represent the basis for the elaboration of this research: 1. It is possible to define a model based on a closed-loop paradigm and ICT tools that formalises the design of enhanced healthcare services for chronic heart failure patients. 2. The described closed-loop model can be exemplified in a real service that supports the management of chronic heart failure disease. 3. The introduction, adoption and use of a service based on the outlined model will result in improvements in the health status of patients suffering from heart failure. 4. The user experience of patients when utilizing a system based on the defined closed-loop model will be enhanced. The definition of the closed-loop model for the healthcare support of heart failure patients has been based on the standard ISO/EN 13940 System of concepts to support continuity of care. 
It includes a set of concepts, processes, workflows, and services as main components, and it represents a formalization of services for heart failure patients. In order to be validated, the proposed closed-loop model has been instantiated into a real service and a supporting IT system. The design and implementation of the system followed the user-centred design methodology Goal Oriented Design. The validation, which included an observational clinical study, aimed to investigate the effect that a service based on the closed-loop model had on heart failure patients’ health status. The analysis of results comprised quantitative and qualitative analysis methods. The quantitative analysis was focused on determining the health status of patients based on objective data (obtained in lab tests or physical examinations). Two specific indexes were defined and considered in this analysis: the stability index and the health status evolution index. The qualitative analysis assessed the self-perception of patients’ health status in terms of quality of life, self-care, anxiety and depression, as well as knowledge levels. It was based on the data gathered through several standard instruments (i.e. EQ-5D, the Hospital Anxiety and Depression Scale, the Kansas City Cardiomyopathy Questionnaire, the Dutch Heart Failure Knowledge Scale, and the European Heart Failure Self-care Behaviour Scale) as well as dedicated non-standardized user experience questionnaires. The results obtained in both analyses, quantitative and qualitative, were compared in order to assess the correlation between the objective and subjective health status of patients. The results of the validation showed that the proposed model contributed to improve the health status of the patients and had a positive effect on the patients’ care. It also proved that the model is a valid instrument for designing enhanced healthcare services for heart failure patients.
Resumo:
The development of gene-replacement therapy for inborn errors of metabolism has been hindered by the limited number of suitable large-animal models of these diseases and by inadequate methods of assessing the efficacy of treatment. Such methods should provide sensitive detection of expression in vivo and should be unaffected by concurrent pharmacologic and dietary regimens. We present the results of studies in a neonatal bovine model of citrullinemia, an inborn error of urea-cycle metabolism characterized by deficiency of argininosuccinate synthetase and consequent life-threatening hyperammonemia. Measurements of the flux of nitrogen from orally administered 15NH4 to [15N]urea were used to determine urea-cycle activity in vivo. In control animals, these isotopic measurements proved to be unaffected by pharmacologic treatments. Systemic administration of a first-generation E1-deleted adenoviral vector expressing human argininosuccinate synthetase resulted in transduction of hepatocytes and partial correction of the enzyme defect. The isotopic method showed significant restoration of urea synthesis. Moreover, the calves showed clinical improvement and normalization of plasma glutamine levels after treatment. The results show the clinical efficacy of treating a large-animal model of an inborn error of hepatocyte metabolism in conjunction with a method for sensitively measuring correction in vivo. These studies will be applicable to human trials of the treatment of this disorder and other related urea-cycle disorders.
Resumo:
6-Hydroxydopamine (6-OHDA) is widely used to selectively lesion dopaminergic neurons of the substantia nigra (SN) in the creation of animal models of Parkinson’s disease. In vitro, the death of PC-12 cells caused by exposure to 6-OHDA occurs with characteristics consistent with an apoptotic mechanism of cell death. To test the hypothesis that apoptotic pathways are involved in the death of dopaminergic neurons of the SN caused by 6-OHDA, we created a replication-defective genomic herpes simplex virus-based vector containing the coding sequence for the antiapoptotic peptide Bcl-2 under the transcriptional control of the simian cytomegalovirus immediate early promoter. Transfection of primary cortical neurons in culture with the Bcl-2-producing vector protected those cells from naturally occurring cell death over 3 weeks. Injection of the Bcl-2-expressing vector into SN of rats 1 week before injection of 6-OHDA into the ipsilateral striatum increased the survival of neurons in the SN, detected either by retrograde labeling of those cells with fluorogold or by tyrosine hydroxylase immunocytochemistry, by 50%. These results, demonstrating that death of nigral neurons induced by 6-OHDA lesioning may be blocked by the expression of Bcl-2, are consistent with the notion that cell death in this model system is at least in part apoptotic in nature and suggest that a Bcl-2-expressing vector may have therapeutic potential in the treatment of Parkinson’s disease.
Resumo:
Elastic fibers consist of two morphologically distinct components: elastin and 10-nm fibrillin-containing microfibrils. During development, the microfibrils form bundles that appear to act as a scaffold for the deposition, orientation, and assembly of tropoelastin monomers into an insoluble elastic fiber. Although microfibrils can assemble independent of elastin, tropoelastin monomers do not assemble without the presence of microfibrils. In the present study, immortalized ciliary body pigmented epithelial (PE) cells were investigated for their potential to serve as a cell culture model for elastic fiber assembly. Northern analysis showed that the PE cells express microfibril proteins but do not express tropoelastin. Immunofluorescence staining and electron microscopy confirmed that the microfibril proteins produced by the PE cells assemble into intact microfibrils. When the PE cells were transfected with a mammalian expression vector containing a bovine tropoelastin cDNA, the cells were found to express and secrete tropoelastin. Immunofluorescence and electron microscopic examination of the transfected PE cells showed the presence of elastic fibers in the matrix. Biochemical analysis of this matrix showed the presence of cross-links that are unique to mature insoluble elastin. Together, these results indicate that the PE cells provide a unique, stable in vitro system in which to study elastic fiber assembly.
Resumo:
One of the current limitations of gene transfer protocols involving mammalian genomes is the lack of spatial and temporal control over the desired gene manipulation. Starting from a human keratin gene showing a complex regulation as a template, we identified regulatory sequences that confer inducible gene expression in a subpopulation of keratinocytes in stratified epithelia of adult transgenic mice. We used this cassette to produce transgenic mice with an inducible skin blistering phenotype mimicking a form of epidermolytic hyperkeratosis, a keratin gene disorder. Upon induction by topical application of a phorbol ester, the mutant keratin transgene product accumulates in the differentiating layers of epidermis, leading to keratinocyte lysis after application of mechanical trauma. This mouse model will allow for a better understanding of the complex relationship between keratin mutation, keratinocyte cytoarchitecture, and hypersensitivity to trauma. The development of an inducible expression vector showing an exquisite cellular specificity has important implications for manipulating genes in a spatially and temporally controlled fashion in transgenic mice, and for the design of gene therapy strategies using skin as a tissue source for the controlled delivery of foreign substances.
Resumo:
A recombinant adeno-associated virus (rAAV) vector capable of infecting cells and expressing rat glial cell line-derived neurotrophic factor (rGDNF), a putative central nervous system dopaminergic survival factor, under the control of a potent cytomegalovirus (CMV) immediate/early promoter (AAV-MD-rGDNF) was constructed. Two experiments were performed to evaluate the time course of rAAV-mediated GDNF protein expression and to test the vector in an animal model of Parkinson’s disease. To evaluate the ability of rAAV-rGDNF to protect nigral dopaminergic neurons in the progressive Sauer and Oertel 6-hydroxydopamine (6-OHDA) lesion model, rats received perinigral injections of either rAAV-rGDNF virus or rAAV-lacZ control virus 3 weeks prior to a striatal 6-OHDA lesion and were sacrificed 4 weeks after 6-OHDA. Cell counts of back-labeled fluorogold-positive neurons in the substantia nigra revealed that rAAV-MD-rGDNF protected a significant number of cells when compared with cell counts of rAAV-CMV-lacZ-injected rats (94% vs. 51%, respectively). In close agreement, 85% of tyrosine hydroxylase-positive cells remained in the nigral rAAV-MD-rGDNF group vs. only 49% in the lacZ group. A separate group of rats was given identical perinigral virus injections and sacrificed at 3 and 10 weeks after surgery. Nigral GDNF protein expression remained relatively stable over the 10 weeks investigated. These data indicate that the use of rAAV, a noncytopathic viral vector, can promote delivery of functional levels of GDNF in a degenerative model of Parkinson’s disease.
Resumo:
Adenoviral vectors can direct high-level expression of a transgene, but, due to a host immune response to adenoviral antigens, expression is of limited duration, and repetitive administration has generally been unsuccessful. Exposure to foreign proteins beginning in the neonatal period may alter or ablate the immune response. We injected adult and neonatal (immunocompetent) CD-1 mice intravenously with an adenoviral vector expressing human blood coagulation factor IX. In both groups of mice, expression of human factor IX persisted for 12-16 weeks. However, in mice initially injected as adults, repeat administration of the vector resulted in no detectable expression of the transgene, whereas in mice initially injected in the neonatal period, repeat administration resulted in high-level expression of human factor IX. We show that animals that fail to express the transgene on repeat administration have developed high-titer neutralizing antibodies to adenovirus, whereas those that do express factor IX have not. This experimental model suggests that newborn mice can be tolerized to adenoviral vectors and demonstrates that at least one repeat injection of the adenoviral vector is possible; the model will be useful in elucidating the immunologic mechanisms underlying successful repeat administration of adenoviral vectors.
Resumo:
An immunoglobulin light chain protein was isolated from the urine of an individual (BRE) with systemic amyloidosis. The complete amino acid sequence of the variable region of the light chain (VL) protein established it as a kappa I, which when compared with other kappa I amyloid-associated proteins had unique residues, including Ile-34, Leu-40, and Tyr-71. To study the tertiary structure, BRE VL was expressed in Escherichia coli by using a PCR product amplified from patient BRE's bone marrow DNA. The PCR product was ligated into pCZ11, a thermally inducible replication vector. Recombinant BRE VL was isolated, purified to homogeneity, and crystallized by using ammonium sulfate as the precipitant. Two crystal forms were obtained. In crystal form I, the BRE VL kappa domain crystallizes as a dimer with unit cell constants isomorphous to previously published kappa protein structures. Comparison with a nonamyloid VL kappa domain from patient REI identified significant differences in the position of residues in the hypervariable segments plus variations in framework region (FR) segments 40-46 (FR2) and 66-67 (FR3). In addition, positional differences can be seen along the two types of local diads, corresponding to the monomer-monomer and dimer-dimer interfaces. From the packing diagram, a model for the amyloid light chain (AL) fibril is proposed based on a pseudohexagonal spiral structure with a rise of approximately the width of two dimers per 360-degree turn. This spiral structure could be consistent with the dimensions of amyloid fibrils as determined by electron microscopy.
Resumo:
Rodent tumor cells engineered to secrete cytokines such as interleukin 2 (IL-2) or IL-4 are rejected by syngeneic recipients due to an enhanced antitumor host immune response. An adenovirus vector (AdCAIL-2) containing the human IL-2 gene has been constructed and shown to direct secretion of high levels of human IL-2 in infected tumor cells. AdCAIL-2 induces regression of tumors in a transgenic mouse model of mammary adenocarcinoma following intratumoral injection. Elimination of existing tumors in this way results in immunity against a second challenge with tumor cells. These findings suggest that adenovirus vectors expressing cytokines may form the basis for highly effective immunotherapies of human cancers.
Abstract:
The promoter of the bean PAL2 gene (encoding phenylalanine ammonia-lyase; EC 4.3.1.5) is a model for studies of tissue-restricted gene expression in plants. Petal epidermis is one of the tissues in which this promoter is activated in tobacco. Previous work suggested that a major factor establishing the pattern of PAL2 expression in tobacco petals is the tissue distribution of a protein closely related to Myb305, which is a Myb-like transcriptional activator from snapdragon. In the present work, we show that Myb305 expression in tobacco leaves causes ectopic activation of the PAL2 promoter. To achieve Myb305 expression in planta, a viral expression vector was used. This approach combines the utility of transient assays with the possibility of direct biochemical detection of the introduced factor and may have wider application for studying the function of plant transcription factors.
Abstract:
Polyamide ("peptide") nucleic acids (PNAs) are molecules with antigene and antisense effects that may prove to be effective neuropharmaceuticals if these molecules are enabled to undergo transport through the brain capillary endothelial wall, which makes up the blood-brain barrier in vivo. The model PNA used in the present studies is an 18-mer that is antisense to the rev gene of human immunodeficiency virus type 1 and is biotinylated at the amino terminus and iodinated at a tyrosine residue near the carboxyl terminus. The biotinylated PNA was linked to a conjugate of streptavidin (SA) and the OX26 murine monoclonal antibody to the rat transferrin receptor. The blood-brain barrier is endowed with high transferrin receptor concentrations, enabling the OX26-SA conjugate to deliver the biotinylated PNA to the brain. Although the brain uptake of the free PNA was negligible following intravenous administration, the brain uptake of the PNA was increased at least 28-fold when the PNA was bound to the OX26-SA vector. The brain uptake of the PNA bound to the OX26-SA vector was 0.1% of the injected dose per gram of brain at 60 min after an intravenous injection, approximating the brain uptake of intravenously injected morphine. The PNA bound to the OX26-SA vector retained the ability to bind to synthetic rev mRNA as shown by RNase protection assays. In summary, the present studies show that while the transport of PNAs across the blood-brain barrier is negligible, delivery of these potential neuropharmaceutical drugs to the brain may be achieved by coupling them to vector-mediated peptide-drug delivery systems.
Abstract:
Successful gene transfer into stem cells would provide a potentially useful therapeutic modality for treatment of inherited and acquired disorders affecting hematopoietic tissues. Coculture of primate bone marrow cells with retroviral producer cells, autologous stroma, or an engineered stromal cell line expressing human stem cell factor has resulted in a low efficiency of gene transfer as reflected by the presence of 0.1-5% of genetically modified cells in the blood of reconstituted animals. Our experiments in a nonhuman primate model were designed to explore various transduction protocols that did not involve coculture in an effort to define clinically useful conditions and to enhance transduction efficiency of repopulating cells. We report the presence of genetically modified cells at levels ranging from 0.1% (granulocytes) to 14% (B lymphocytes) more than 1 year following reconstitution of myeloablated animals with CD34+ immunoselected cells transduced in suspension culture with cytokines for 4 days with a retrovirus containing the glucocerebrosidase gene. A period of prestimulation for 7 days in the presence of autologous stroma separated from the CD34+ cells by a porous membrane did not appear to enhance transduction efficiency. Infusion of transduced CD34+ cells into animals without myeloablation resulted in only transient appearance of genetically modified cells in peripheral blood. Our results document that retroviral transduction of primate repopulating cells can be achieved without coculture with stroma or producer cells and that the proportion of genetically modified cells may be highest in the B-lymphoid lineage under the given transduction conditions.
Abstract:
We discuss the well-posedness of a mathematical model that is used in the literature for the simulation of lithium-ion batteries. First, a mathematical model based on a macrohomogeneous approach is presented, following previous work. Then it is shown, from a physical and a mathematical point of view, that a boundary condition widely used in the literature is not correct. Although the errors could be just sign typos (which can be explained as carelessness in the use of d/dx versus d/dn, with n the outward unit vector) and authors using this model probably use the correct boundary condition when they solve it in order to do simulations, readers should be aware of the right choice. Therefore, the deduction of the correct boundary condition is done here, and a mathematical study of the well-posedness of the corresponding problem is presented.
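The sign distinction at issue can be made explicit with a minimal one-dimensional illustration (this is a generic sketch, not the battery model's full boundary conditions): on a domain $[0,L]$, the outward unit normal $n$ points in the $-x$ direction at the left boundary and in the $+x$ direction at the right boundary, so a flux condition written with $d/dn$ flips sign at $x=0$ relative to the same condition written with $d/dx$:

```latex
\left.\frac{\partial u}{\partial n}\right|_{x=0} = -\left.\frac{\partial u}{\partial x}\right|_{x=0},
\qquad
\left.\frac{\partial u}{\partial n}\right|_{x=L} = +\left.\frac{\partial u}{\partial x}\right|_{x=L}.
```

Writing a prescribed-flux condition as $\partial u/\partial x = g$ at both boundaries when the physics specifies $\partial u/\partial n = g$ therefore introduces exactly the kind of sign error the abstract describes.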
Abstract:
Jean Piaget's theory of the development of intelligence has been used in the field of computational intelligence as inspiration for proposing models of cognitive agents. Although the proposed models implement important basic aspects of Piaget's theory, such as the structure of the cognitive schema, they do not consider the symbol grounding problem and therefore do not address the aspects of the theory that lead to the autonomous acquisition of the basic semantics for the cognitive organization of the external world, such as the acquisition of the notion of object. In this work we present a computational model of a cognitive schema, inspired by Piaget's theory of sensorimotor intelligence, that develops autonomously by building mechanisms through computational principles guided by the symbol grounding problem. The proposed schema model is based on the classification of sensorimotor situations used for perceiving, capturing, and storing deterministic causal relations of the smallest granularity. These causalities are then expanded in space and time by more complex structures that make use of the earlier ones and that are also designed so that other, more complex autonomous computational structures can use them in turn. The proposed model is implemented by a feed-forward artificial neural network whose output-layer elements self-organize to generate an objectified sensorimotor graph. Some computational mechanisms already existing in the field of computational intelligence were modified to fit the paradigms of null semantics and autonomous mental development, taken as the basis for dealing with the symbol grounding problem.
The self-organizing sensorimotor graph that implements a schema model inspired by Piaget's theory, proposed in this work together with the computational principles used in its design, is a step toward the autonomous artificial cognitive development of the notion of object.
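The architecture described above (a feed-forward network whose output units self-organize into a sensorimotor graph) can be sketched in a few lines. This is a hypothetical illustration, not the thesis's actual implementation: the class name, layer sizes, threshold, and the Hebbian-style co-activation rule used to link output units are all assumptions made for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

class SensorimotorGraphNet:
    """Hypothetical sketch: a feed-forward net whose output units are
    linked into a graph by co-activation, loosely following the
    self-organizing sensorimotor graph described in the abstract.
    All names and parameters here are illustrative assumptions."""

    def __init__(self, n_in, n_hidden, n_out, threshold=0.6):
        self.W1 = rng.normal(0.0, 1.0, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 1.0, (n_hidden, n_out))
        self.threshold = threshold
        # co-activation counts between output units (the "graph")
        self.graph = np.zeros((n_out, n_out))

    def forward(self, x):
        h = np.tanh(x @ self.W1)                   # hidden layer
        return 1.0 / (1.0 + np.exp(-(h @ self.W2)))  # sigmoid outputs

    def observe(self, x):
        """Feed one sensorimotor situation; link co-active output units."""
        y = self.forward(x)
        active = np.flatnonzero(y > self.threshold)
        for i in active:
            for j in active:
                if i != j:
                    self.graph[i, j] += 1          # strengthen edge i--j
        return active

# Usage: expose the net to random "situations"; edges accumulate only
# between output units that fire together.
net = SensorimotorGraphNet(n_in=4, n_hidden=8, n_out=5)
for _ in range(50):
    net.observe(rng.normal(size=4))
```

Because edges are incremented symmetrically and never on the diagonal, the resulting adjacency matrix is a symmetric graph over the output units, which is the minimal property the described sensorimotor graph would need.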