961 results for fusion and centric inversion
Resumo:
Este PFC es un trabajo muy práctico; los objetivos fueron impuestos por el tutor como parte del desarrollo de herramientas (software y hardware) que serán utilizadas posteriormente a nivel de docencia e investigación. El PFC tiene dos áreas de trabajo: la principal y primera que se expone es la utilización de una herramienta de simulación térmica para caracterizar dispositivos semiconductores con disipador; la segunda es la expansión de una tarjeta de adquisición de datos con unas PCBs diseñadas que no estaban disponibles comercialmente. Se han probado y configurado “Autodesk 2013 Inventor Fusion” y “Autodesk 2013 Simulation Multiphysics” para la simulación térmica de dispositivos de alta potencia. Estas aplicaciones son respectivamente de diseño mecánico y de simulación térmica, y la UPM dispone actualmente de licencia. En esta parte del proyecto se realizará un manual de utilización, para que se continúe con esta línea de trabajo en otros PFC. Además, se han diseñado mecánicamente y simulado térmicamente diodos LED de alta potencia luminosa (High Brightness Light Emitting Diodes, HB-LEDs), tanto blancos como del ultravioleta cercano (UVA). Las simulaciones térmicas corresponden a varios tipos de LEDs que actualmente se están empleando y caracterizando térmicamente en Proyectos Fin de Carrera y una tesis doctoral. En la segunda parte del PFC se diseñan y realizan unas placas de circuito impreso (PCB) cuya función es formar parte de sistemas de instrumentación de adquisición automática de datos basados en LabVIEW. Con esta instrumentación se pueden realizar ensayos de fiabilidad y de otro tipo a dispositivos y sistemas electrónicos. ABSTRACT. This PFC is a very practical work; the objectives were set by the tutor as part of the development of tools (software and hardware) that will later be used for teaching and research.
The PFC has two parts. The first one explains the use of a thermal simulation software tool to characterize semiconductor devices with a heatsink; the second one is the expansion of a data acquisition card with custom-designed PCBs, which were not commercially available. "Autodesk 2013 Inventor Fusion" and "Autodesk 2013 Simulation Multiphysics" have been tested and configured for the thermal simulation of high-power devices. These applications are, respectively, for mechanical design and thermal simulation, and the UPM currently holds licenses for them. In this part of the project a user manual will be produced so that this line of work can be continued in other PFCs. In addition, high-brightness LEDs (High Brightness Light Emitting Diodes, HB-LEDs), both white and near-ultraviolet (UVA), have been mechanically designed and thermally simulated. The thermal simulations cover several types of LEDs that are currently being used and thermally characterized in other final-year projects and a doctoral thesis. In the second part of the PFC, printed circuit boards (PCBs) are designed and built whose function is to form part of LabVIEW-based instrumentation systems for automatic data acquisition. With this instrumentation, reliability tests and other kinds of tests can be performed on electronic devices and systems.
Resumo:
La teledetección o percepción remota (remote sensing) es la ciencia que abarca la obtención de información (espectral, espacial, temporal) sobre un objeto, área o fenómeno a través del análisis de datos adquiridos por un dispositivo que no está en contacto con el elemento estudiado. Los datos obtenidos a partir de la teledetección para la observación de la superficie terrestre comúnmente son imágenes, que se caracterizan por contar con un sinnúmero de aplicaciones en continua evolución, por lo cual, para solventar los constantes requerimientos de nuevas aplicaciones, a menudo se proponen nuevos algoritmos que mejoran o facilitan algún proceso en particular. Para el desarrollo de dichos algoritmos, es preciso hacer uso de métodos matemáticos que permitan la manipulación de la información con algún fin específico. Dentro de estos métodos, el análisis multi-resolución se caracteriza por permitir analizar una señal en diferentes escalas, lo que facilita trabajar con datos que puedan tener resoluciones diferentes, tal es el caso de las imágenes obtenidas mediante teledetección. Una de las alternativas para la implementación de análisis multi-resolución es la Transformada Wavelet Compleja de Doble Árbol (DT-CWT). Esta transformada se implementa a partir de dos filtros reales y se caracteriza por presentar invariancia a traslaciones, al precio de no ser críticamente muestreada. A partir de las características de la DT-CWT se propone su uso en el diseño de algoritmos de procesamiento de imagen, particularmente imágenes de teledetección. Estos nuevos algoritmos de procesamiento digital de imágenes de teledetección corresponden particularmente a fusión y detección de cambios. En este contexto esta tesis presenta tres algoritmos principales aplicados a fusión, evaluación de fusión y detección de cambios en imágenes.
Para el caso de fusión de imágenes, se presenta un esquema general que puede ser utilizado con cualquier algoritmo de análisis multi-resolución; este algoritmo parte de la implementación mediante DT-CWT para luego extenderlo a un método alternativo, el filtro bilateral. En cualquiera de los dos casos la metodología implica que la inyección de componentes pueda realizarse mediante diferentes alternativas. En el caso del algoritmo de evaluación de fusión se presenta un nuevo esquema que hace uso de procesos de clasificación, lo que permite evaluar los resultados del proceso de fusión de forma individual para cada tipo de cobertura de uso de suelo que se defina en el proceso de evaluación. Esta metodología permite complementar los procesos de evaluación tradicionales y puede facilitar el análisis del impacto de la fusión sobre determinadas clases de suelo. Finalmente, los algoritmos de detección de cambios propuestos abarcan dos enfoques. El primero está orientado a la obtención de mapas de sequía en datos multi-temporales a partir de índices espectrales. El segundo enfoque propone la utilización de un índice global de calidad espectral como filtro espacial. La utilización de dicho filtro facilita la comparación espectral global entre dos imágenes; esto, unido a la utilización de umbrales, conlleva la obtención de imágenes diferencia que contienen la información de cambio. ABSTRACT. Remote sensing is the science of gathering information (spectral, spatial, temporal) about an object, area or phenomenon through the analysis of data acquired by a device that is not in contact with the studied item. In general, the data obtained from remote sensing for observing the earth’s surface are images, which are characterized by having a large number of applications that are constantly evolving. Therefore, to meet the constant requirements of new applications, new algorithms are often proposed to improve or facilitate a particular process.
In order to develop these algorithms, mathematical methods are needed that allow the information to be manipulated for a specific purpose. Among these, multiresolution analysis makes it possible to analyze a signal at different scales. One implementation option is the Dual-Tree Complex Wavelet Transform (DT-CWT), which is built from two real filters and is characterized by shift invariance, at the price of not being critically sampled. Among the advantages of this transform is its successful application to image fusion and change detection. In this regard, this thesis presents three main algorithms applied to image fusion, fusion assessment and change detection in multitemporal images. For image fusion, a general scheme is presented that can be used with any multiresolution analysis technique; the algorithm is first implemented with the DT-CWT and then extended to an alternative method, the bilateral filter. In either case the methodology allows the injection of components to be carried out in several ways. For fusion assessment, the proposal focuses on a scheme that uses classification processes, which allows the fusion results to be evaluated individually for each type of land-use cover defined in the evaluation process. This methodology complements traditional assessment processes and can facilitate the analysis of the impact of fusion on particular land-cover classes. Finally, two change detection approaches are included. The first is aimed at obtaining drought maps from spectral indices in multitemporal data. The second uses a global index of spectral quality as a spatial filter. This filter facilitates a global spectral comparison between two images and, combined with thresholding, yields difference images containing the change information.
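The general fusion scheme summarized in this abstract (decompose both co-registered source images with a multiresolution transform, inject or select coefficients, reconstruct) can be sketched as follows. This is an illustrative toy version only: it uses a single-level 2-D Haar transform in place of the DT-CWT, and maximum-magnitude selection as one possible injection rule; it is not the thesis's actual algorithm.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition of an even-sized image.

    Returns the approximation band and three detail bands (h, v, d)."""
    p00, p10 = img[0::2, 0::2], img[1::2, 0::2]
    p01, p11 = img[0::2, 1::2], img[1::2, 1::2]
    a = (p00 + p10 + p01 + p11) / 4  # low-pass (approximation)
    h = (p00 - p10 + p01 - p11) / 4  # horizontal detail
    v = (p00 + p10 - p01 - p11) / 4  # vertical detail
    d = (p00 - p10 - p01 + p11) / 4  # diagonal detail
    return a, h, v, d

def ihaar2d(a, h, v, d):
    """Exact inverse of haar2d."""
    out = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    out[0::2, 0::2] = a + h + v + d
    out[1::2, 0::2] = a - h + v - d
    out[0::2, 1::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def fuse(img_a, img_b):
    """Fuse two co-registered images: average the approximation bands
    and keep the larger-magnitude detail coefficient (one injection
    rule among the several alternatives such a scheme admits)."""
    A, B = haar2d(img_a), haar2d(img_b)
    bands = [(A[0] + B[0]) / 2]
    for ca, cb in zip(A[1:], B[1:]):
        bands.append(np.where(np.abs(ca) >= np.abs(cb), ca, cb))
    return ihaar2d(*bands)
```

A quick sanity check of the transform pair is that fusing an image with itself returns the image unchanged.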
Resumo:
El agotamiento, la ausencia o, simplemente, la incertidumbre sobre la cantidad de las reservas de combustibles fósiles se añaden a la variabilidad de los precios y a la creciente inestabilidad en la cadena de aprovisionamiento para crear fuertes incentivos para el desarrollo de fuentes y vectores energéticos alternativos. El atractivo del hidrógeno como vector energético es muy alto en un contexto que abarca, además, fuertes inquietudes por parte de la población sobre la contaminación y las emisiones de gases de efecto invernadero. Debido a su excelente impacto ambiental, la aceptación pública del nuevo vector energético dependería, a priori, del control de los riesgos asociados a su manipulación y almacenamiento. Entre estos, la existencia de un innegable riesgo de explosión aparece como el principal inconveniente de este combustible alternativo. Esta tesis investiga la modelización numérica de explosiones en grandes volúmenes, centrándose en la simulación de la combustión turbulenta en grandes dominios de cálculo en los que la resolución alcanzable está fuertemente limitada. En la introducción, se aborda una descripción general de los procesos de explosión. Se concluye que las restricciones en la resolución de los cálculos hacen necesario el modelado de los procesos de turbulencia y de combustión. Posteriormente, se realiza una revisión crítica de las metodologías disponibles tanto para turbulencia como para combustión, que se lleva a cabo señalando las fortalezas, deficiencias e idoneidad de cada una de las metodologías. Como conclusión de esta investigación, se obtiene que la única estrategia viable para el modelado de la combustión, teniendo en cuenta las limitaciones existentes, es la utilización de una expresión que describa la velocidad de combustión turbulenta en función de distintos parámetros.
Este tipo de modelos se denominan modelos de velocidad de llama turbulenta y permiten cerrar una ecuación de balance para la variable de progreso de combustión. Como conclusión también se ha obtenido que la solución más adecuada para la simulación de la turbulencia es la utilización de diferentes metodologías, LES o RANS, en función de la geometría y de las restricciones en la resolución de cada problema particular. Sobre la base de estos hallazgos, se aborda la creación de un modelo de combustión en el marco de los modelos de velocidad de la llama turbulenta. La metodología propuesta es capaz de superar las deficiencias existentes en los modelos disponibles para aquellos problemas en los que se precisa realizar cálculos con una resolución moderada o baja. Particularmente, el modelo utiliza un algoritmo heurístico para impedir el crecimiento del espesor de la llama, una deficiencia que lastraba el célebre modelo de Zimont. Bajo este enfoque, el énfasis del análisis se centra en la determinación de la velocidad de combustión, tanto laminar como turbulenta. La velocidad de combustión laminar se determina a través de una nueva formulación capaz de tener en cuenta la influencia simultánea en la velocidad de combustión laminar de la relación de equivalencia, la temperatura, la presión y la dilución con vapor de agua. La formulación obtenida es válida para un dominio de temperaturas, presiones y dilución con vapor de agua más extenso que el de cualquiera de las formulaciones previamente disponibles. Por otra parte, el cálculo de la velocidad de combustión turbulenta puede ser abordado mediante el uso de correlaciones que permiten la determinación de esta magnitud en función de distintos parámetros. Con el objetivo de seleccionar la formulación más adecuada, se ha realizado una comparación entre los resultados obtenidos con diversas expresiones y los resultados obtenidos en los experimentos.
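The closure described above, in which an expression for the turbulent burning velocity closes a balance equation for the combustion progress variable, typically takes a Zimont-type form such as the following. This is a standard textbook formulation, not necessarily the exact equation used in the thesis:

```latex
% Zimont-type closure for the Favre-averaged progress variable \tilde{c}
\frac{\partial\left(\bar{\rho}\,\tilde{c}\right)}{\partial t}
  + \nabla\cdot\left(\bar{\rho}\,\tilde{\mathbf{u}}\,\tilde{c}\right)
  = \nabla\cdot\left(\frac{\mu_t}{\mathrm{Sc}_t}\,\nabla\tilde{c}\right)
  + \rho_u\,U_t\,\left|\nabla\tilde{c}\right|
```

Here $\tilde{c}$ is the progress variable ($c=0$ unburnt, $c=1$ burnt), $\bar{\rho}$ the mean density, $\rho_u$ the unburnt-gas density, $\mu_t/\mathrm{Sc}_t$ a turbulent diffusivity, and $U_t$ the turbulent burning velocity supplied by a correlation in terms of turbulence and mixture parameters.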
Se concluye que la ecuación debida a Schmidt es la más adecuada teniendo en cuenta las condiciones del estudio. A continuación, se analiza la importancia de las inestabilidades de la llama en la propagación de los frentes de combustión. Su relevancia resulta significativa para mezclas pobres en combustible en las que la intensidad de la turbulencia permanece moderada. Estas condiciones son importantes dado que son habituales en los accidentes que ocurren en las centrales nucleares. Por ello, se lleva a cabo la creación de un modelo que permita estimar el efecto de las inestabilidades, y en concreto de la inestabilidad acústica-paramétrica, en la velocidad de propagación de llama. El modelado incluye la derivación matemática de la formulación heurística de Bauwebs et al. para el cálculo del incremento de la velocidad de combustión debido a las inestabilidades de la llama, así como el análisis de la estabilidad de las llamas con respecto a una perturbación cíclica. Por último, los resultados se combinan para concluir el modelado de la inestabilidad acústica-paramétrica. Tras finalizar esta fase, la investigación se centró en la aplicación del modelo desarrollado en varios problemas de importancia para la seguridad industrial y en el posterior análisis de los resultados y su comparación con los datos experimentales correspondientes. Concretamente, se abordó la simulación de explosiones en túneles y en contenedores, con y sin gradiente de concentración y ventilación. Como resultado general, se logra validar el modelo, confirmando su idoneidad para estos problemas. Como última tarea, se ha realizado un análisis en profundidad de la catástrofe de Fukushima-Daiichi. El objetivo del análisis es determinar la cantidad de hidrógeno que explotó en el reactor número uno, en contraste con los otros estudios sobre el tema, que se han centrado en la determinación de la cantidad de hidrógeno generado durante el accidente.
Como resultado de la investigación, se determinó que la cantidad más probable de hidrógeno consumida durante la explosión fue de 130 kg. Es un hecho notable que la combustión de una cantidad relativamente pequeña de hidrógeno pueda causar un daño tan significativo, lo cual es una muestra de la importancia de este tipo de investigaciones. Las ramas de la industria para las que el modelo desarrollado será de interés abarcan la totalidad de la futura economía del hidrógeno (pilas de combustible, vehículos, almacenamiento energético, etc.), con un impacto especial en los sectores del transporte y la energía nuclear, tanto en las tecnologías de fisión como de fusión. ABSTRACT. The exhaustion, absolute absence or simply the uncertainty about the amount of fossil fuel reserves, added to the variability of their prices and the increasing instability and difficulties in the supply chain, are strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen is very high in a context that additionally comprises concerns about pollution and emissions. Due to its excellent environmental impact, the public acceptance of the new energy carrier will depend on the control of the risks associated with its handling and storage. Among these, the danger of a severe explosion appears as the major drawback of this alternative fuel. This thesis investigates the numerical modeling of large-scale explosions, focusing on the simulation of turbulent combustion in large domains where the achievable resolution is forcefully limited. In the introduction, a general description of the explosion process is undertaken. It is concluded that the restrictions on resolution make it necessary to model the turbulence and combustion processes. Subsequently, a critical review of the available methodologies for both turbulence and combustion is carried out, pointing out their strengths and deficiencies.
As a conclusion of this investigation, it appears clear that the only viable methodology for combustion modeling, given the existing restrictions, is the use of an expression for the turbulent burning velocity to close a balance equation for the combustion progress variable, that is, a model of the turbulent flame speed kind. It was also concluded that, depending on the particular resolution restrictions of each problem and on its geometry, the use of different turbulence simulation methodologies, LES or RANS, is the most adequate solution. Based on these findings, the candidate undertakes the creation of a combustion model within the turbulent flame speed framework that is able to overcome the deficiencies of the available models for low-resolution problems. In particular, the model uses a heuristic algorithm to keep the thickness of the flame brush under control, a serious deficiency of the Zimont model. Under this approach, the emphasis of the analysis lies in the accurate determination of the burning velocity, both laminar and turbulent. On one side, the laminar burning velocity is determined through a newly developed correlation able to describe the simultaneous influence of the equivalence ratio, temperature, steam dilution and pressure on the laminar burning velocity. The formulation obtained is valid over a larger domain of temperature, steam dilution and pressure than any of the previously available formulations. On the other side, a number of turbulent burning velocity correlations are available in the literature. To select the most suitable one, they were compared with experiments and ranked, with the outcome that the formulation due to Schmidt was the most adequate for the conditions studied. Subsequently, the role of flame instabilities in the development of explosions is assessed. Their significance appears to be important for lean mixtures in which the turbulence intensity remains moderate.
These conditions are important because they are typical of accidents in nuclear power plants. Therefore, the creation of a model to account for the instabilities, and concretely the acoustic-parametric instability, is undertaken. This encompasses the mathematical derivation of the heuristic formulation of Bauwebs et al. for the calculation of the burning velocity enhancement due to flame instabilities, as well as the analysis of the stability of flames with respect to a cyclic velocity perturbation. The results are combined to build a model of the acoustic-parametric instability. The following task in this research was to apply the developed model to several problems significant for industrial safety, followed by the analysis of the results and their comparison with the corresponding experimental data. As part of this task, simulations of explosions in a tunnel and in large containers, with and without concentration gradients and venting, were carried out. As a general outcome, the validation of the model is achieved, confirming its suitability for the problems addressed. As a final undertaking, a thorough study of the Fukushima-Daiichi catastrophe was carried out. The analysis aims at determining the amount of hydrogen participating in the explosion that happened in reactor one, in contrast with other analyses centered on the amount of hydrogen generated during the accident. As an outcome of the research, it was determined that the most probable amount of hydrogen exploding during the catastrophe was 130 kg. It is remarkable that the combustion of such a small quantity of material can cause tremendous damage. This is an indication of the importance of these types of investigations. The industrial branches that can benefit from the applications of the model developed in this thesis include the whole future hydrogen economy, as well as nuclear safety in both fusion and fission technology.
Resumo:
Los ataques a redes de información son cada vez más sofisticados y exigen una constante evolución y mejora de las técnicas de detección. Para ello, en este proyecto se ha diseñado e implementado una plataforma cooperativa para la detección de intrusiones basada en red. En primer lugar, se ha realizado un estudio teórico previo del marco tecnológico relacionado con este ámbito, en el que se describe y caracteriza el software que se utiliza para realizar ataques a sistemas (malware) así como los métodos que se utilizan para llegar a transmitir ese software (vectores de ataque). En el documento también se describen los llamados APT, que son ataques dirigidos con una gran inversión económica y temporal. Estos pueden englobar todos los malware y vectores de ataque existentes. Para poder evitar estos ataques, se estudiarán los sistemas de detección y prevención de intrusiones, describiendo brevemente los algoritmos que se tienden a utilizar en la actualidad. En segundo lugar, se ha planteado y desarrollado una plataforma en red dedicada al análisis de paquetes y conexiones para detectar posibles intrusiones. Este sistema está orientado a sistemas SCADA (Supervisory Control And Data Acquisition), aunque funciona sobre cualquier red IPv4/IPv6; para ello se definirá previamente lo que es un sistema SCADA, así como sus partes principales. Para implementar el sistema se han utilizado dispositivos de bajo consumo llamados Raspberry Pi, que se ubican entre la red y el equipo final que se quiera analizar. En ellos se ejecutan dos aplicaciones desarrolladas de tipo cliente-servidor (la Raspberry central ejecuta la aplicación servidora y las esclavas la aplicación cliente) que funcionan de forma cooperativa utilizando la tecnología distribuida de Hadoop, la cual se explica previamente. Mediante esta tecnología se consigue desarrollar un sistema completamente escalable.
La aplicación servidora muestra una interfaz gráfica que permite administrar la plataforma de análisis de forma centralizada, pudiendo ver así las alarmas de cada dispositivo y calificar cada paquete según su peligrosidad. El algoritmo desarrollado en la aplicación calcula el ratio de paquetes/tiempo que entran/salen del equipo final, procesando los paquetes y analizándolos teniendo en cuenta la información de señalización, y creando diferentes bases de datos que irán mejorando la robustez del sistema, reduciendo así la posibilidad de ataques externos. Para concluir, el proyecto inicial incluía el procesamiento en la nube de la aplicación principal, pudiendo administrar así varias infraestructuras concurrentemente; debido al trabajo extra necesario, el sistema se ha dejado preparado para poder implementar esta funcionalidad. En el caso experimental actual el procesamiento de la aplicación servidora se realiza en la Raspberry principal, creando un sistema escalable, rápido y tolerante a fallos. ABSTRACT. Attacks on information networks are increasingly sophisticated and demand constant evolution and improvement of detection techniques. For this project, a cooperative network-based intrusion detection platform has been designed and implemented. First, a preliminary theoretical study of the related technological framework was carried out, describing the software used to attack systems (malware) as well as the methods used to deliver that software (attack vectors). The document also describes APTs, which are targeted attacks involving a large investment of money and time; these can encompass all existing malware and attack vectors. To prevent such attacks, intrusion detection and prevention systems are studied, briefly describing the algorithms commonly used today.
Secondly, a platform for analyzing network packets and connections has been proposed and developed to detect possible intrusions. It is aimed at SCADA (Supervisory Control And Data Acquisition) systems, although it works on any IPv4/IPv6 network; what a SCADA system is, and its main parts, are defined beforehand. To implement the system, low-power devices called Raspberry Pi were used, located between the network and the final device to be analyzed. These run two client-server applications developed for this project (the central Raspberry Pi runs the server application and the slaves run the client application) that work cooperatively using the Hadoop distributed technology, which is explained beforehand. With this technology a fully scalable system is achieved. The server application displays a graphical interface to manage the analysis platform centrally, making it possible to see the alarms of each device and to rate each packet by its dangerousness. The algorithm developed in the application calculates the ratio of packets/time entering/leaving the final device, processing the packets and analyzing the signaling information of each one, and creating different databases that will improve the robustness of the system, thereby reducing the possibility of external attacks. In conclusion, the initial project included cloud processing of the main application, making it possible to manage several infrastructures concurrently; due to the extra work required, the system has been left ready for this functionality to be implemented. In the current experimental case the server application's processing is performed on the main Raspberry Pi, creating a scalable, fast and fault-tolerant system.
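The packets-per-time ratio mentioned in the abstract can be illustrated with a minimal sliding-window rate monitor. Everything below is a hypothetical sketch: the class name, window length and threshold are assumptions, and the actual system additionally processes signaling information and maintains databases, which this sketch omits.

```python
from collections import deque

class RateMonitor:
    """Sliding-window packet-rate monitor (illustrative sketch only).

    Raises an alarm when the packets-per-second rate over the last
    `window_s` seconds exceeds `threshold`; both values are
    hypothetical tuning parameters."""

    def __init__(self, window_s=1.0, threshold=1000):
        self.window_s = window_s
        self.threshold = threshold
        self.stamps = deque()  # timestamps of packets inside the window

    def packet(self, t):
        """Record a packet seen at time `t` (seconds); return True on alarm."""
        self.stamps.append(t)
        # Evict timestamps that have fallen out of the window.
        while self.stamps and t - self.stamps[0] > self.window_s:
            self.stamps.popleft()
        return len(self.stamps) / self.window_s > self.threshold
```

In use, each captured packet for the monitored device would be fed through `packet()` with its capture timestamp, and alarms would be forwarded to the central server application.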
Resumo:
A chimeric retroviral vector (33E67) containing a CD33-specific single-chain antibody was generated in an attempt to target cells displaying the CD33 surface antigen. The chimeric envelope protein was translated, processed, and incorporated into viral particles as efficiently as wild-type envelope protein. The viral particles carrying the 33E67 envelope protein could bind efficiently to the CD33 receptor on target cells and were internalized, but no gene transfer occurred. A unique experimental approach was used to examine the basis for this postbinding block. Our data indicate that the chimeric envelope protein itself cannot participate in the fusion process, the most reasonable explanation being that this chimeric protein cannot undergo the appropriate conformational change that is thought to be triggered by receptor binding, a suggested prerequisite to subsequent fusion and core entry. These results indicate that the block to gene transfer in this system, and probably in most of the current chimeric retroviral vectors to date, is the inability of the chimeric envelope protein to undergo this obligatory conformational change.
Resumo:
In hippocampal neurons, neurotransmitter release can be regulated by protein kinase A (PKA) through a direct action on the secretory machinery. To identify the site of PKA modulation, we have taken advantage of the ability of the neurotoxin Botulinum A to cleave the synaptic protein SNAP-25. Cleavage of this protein decreases the Ca2+ responsiveness of the secretory machinery by partially uncoupling Ca2+-sensing from fusion per se. This is expressed as a shift toward higher Ca2+ levels of the Ca2+ to neurotransmitter release relationship and as a perturbation of synaptic delay under conditions where secretion induced by the Ca2+-independent secretagogue ruthenium red is unimpaired. We find that SNAP-25 cleavage also perturbs PKA-dependent modulation of secretion; facilitation of ruthenium red-evoked neurotransmitter release by the adenylyl cyclase activator forskolin is blocked completely after Botulinum toxin A action. Together with our observation that forskolin modifies the Ca2+ to neurotransmitter release relationship, our results suggest that SNAP-25 acts as a functional linker between Ca2+ detection and fusion and that PKA modulates an early step in the secretory machinery related to calcium sensing to facilitate synaptic transmission.
Resumo:
Primary CD8+ T cells from HIV+ asymptomatics can suppress virus production from CD4+ T cells acutely infected with either non-syncytia-inducing (NSI) or syncytia-inducing (SI) HIV-1 isolates. NSI strains of HIV-1 predominantly use the CCR5 chemokine receptor as a fusion cofactor, whereas fusion of T cell line-adapted SI isolates is mediated by another chemokine receptor, CXCR4. The CCR5 ligands RANTES (regulated on activation, normal T cell expressed and secreted), macrophage inflammatory protein 1α (MIP-1α), and MIP-1β are HIV-1 suppressive factors secreted by CD8+ cells that inhibit NSI viruses. Recently, the CXC chemokine stromal cell-derived factor 1 (SDF-1) was identified as a ligand for CXCR4 and shown to inhibit SI strains. We speculated that SDF-1 might be an effector molecule for CD8+ suppression of SI isolates and assessed several SDF-1 preparations for inhibition of HIV-1LAI-mediated cell–cell fusion, and examined levels of SDF-1 transcripts in CD8+ T cells. SDF-1 fusion inhibitory activity correlated with the N terminus, and the α and β forms of SDF-1 exhibited equivalent fusion blocking activity. SDF-1 preparations having the N terminus described by Bleul et al. (Bleul, C.C., Fuhlbrigge, R.C., Casasnovas, J.M., Aiuti, A. & Springer, T.A. (1996) J. Exp. Med. 184, 1101–1109) readily blocked HIV-1LAI-mediated fusion, whereas forms containing two or three additional N-terminal amino acids lacked this activity despite their ability to bind and/or signal through CXCR4. Though SDF-1 is constitutively expressed in most tissues, CD8 T cells contained extremely low levels of SDF-1 mRNA transcripts (<1 transcript/5,000 cells), and these levels did not correlate with virus suppressive activity. We conclude that suppression of SI strains of HIV-1 by CD8+ T cells is unlikely to involve SDF-1.
Resumo:
Brain capillary endothelial cells (BCECs) are targets of CD4-independent infection by HIV-1 and simian immunodeficiency virus (SIV) strains in vitro and in vivo. Infection of BCECs may provide a portal of entry for the virus into the central nervous system and could disrupt blood–brain barrier function, contributing to the development of AIDS dementia. We found that rhesus macaque BCECs express chemokine receptors involved in HIV and SIV entry including CCR5, CCR3, CXCR4, and STRL33, but not CCR2b, GPR1, or GPR15. Infection of BCECs by the neurovirulent strain SIV/17E-Fr was completely inhibited by aminooxypentane-RANTES (AOP-RANTES) in the presence or absence of ligands, but not by eotaxin or antibodies to CD4. We found that the envelope (env) proteins from SIV/17E-Fr and several additional SIV strains mediated cell–cell fusion and virus infection with CD4-negative, CCR5-positive cells. In contrast, fusion with cells expressing the coreceptors STRL33, GPR1, and GPR15 was CD4-dependent. These results show that CCR5 can serve as a primary receptor for SIV in BCECs and suggest a possible CD4-independent mechanism for blood–brain barrier disruption and viral entry into the central nervous system.
Resumo:
The Drosophila fusome is a germ cell-specific organelle assembled from membrane skeletal proteins and membranous vesicles. Mutational studies that have examined inactivating alleles of fusome proteins indicate that the organelle plays central roles in germ cell differentiation. Although mutations in genes encoding skeletal fusome components prevent proper cyst formation, mutations in the bag-of-marbles gene disrupt the assembly of membranous cisternae within the fusome and block cystoblast differentiation altogether. To understand the relationship between fusome cisternae and cystoblast differentiation, we have begun to identify other proteins in this network of fusome tubules. In this article we present evidence that the fly homologue of the transitional endoplasmic reticulum ATPase (TER94) is one such protein. The presence of TER94 suggests that the fusome cisternae grow by vesicle fusion and are a germ cell modification of endoplasmic reticulum. We also show that fusome association of TER94 is Bam-dependent, suggesting that cystoblast differentiation may be linked to fusome reticulum biogenesis.
Abstract:
The signal recognition particle (SRP) is a ribonucleoprotein composed of an Alu domain and an S domain. The S domain contains unique sequence SRP RNA and four SRP proteins: SRP19, SRP54, SRP68, and SRP72. SRP interacts with ribosomes to bring translating membrane and secreted proteins to the endoplasmic reticulum (ER) for proper processing. Additionally, SRP RNA is a member of a family of small nonribosomal RNAs found recently in the nucleolus, suggesting that the nucleolus is more plurifunctional than previously realized. It was therefore of interest to determine whether other SRP components localize to this intranuclear site. In transfected rat fibroblasts, green fluorescent protein fusions of SRP19, SRP68, and SRP72 localized to the nucleolus, as well as to the cytoplasm, as expected. SRP68 also accumulated in the ER, consistent with its affinity for the ER-bound SRP receptor. SRP54 was detected in the cytoplasm as a green fluorescent protein fusion and in immunofluorescence studies, but was not detected in the nucleolus. In situ hybridization experiments also revealed endogenous SRP RNA in the nucleolus. These results demonstrate that SRP RNA and three SRP proteins visit the nucleolus, suggesting that partial SRP assembly, or another unidentified activity of the SRP components, occurs at the nucleolus. SRP54 apparently interacts with nascent SRP beyond the nucleolus, consistent with in vitro reconstitution experiments showing that SRP19 must bind to SRP RNA before SRP54 binds. Our findings support the notion that the nucleolus is the site of assembly and/or interaction between the family of ribonucleoproteins involved in protein synthesis, in addition to ribosomes themselves.
Abstract:
We have demonstrated that the plasmalemmal vesicles (caveolae) of the continuous microvascular endothelium function as transcytotic vesicular carriers for protein molecules >20 Å and that transcytosis is an N-ethylmaleimide–sensitive factor (NSF)-dependent, N-ethylmaleimide-sensitive process. We have further investigated NSF interactions with endothelial proteins to find out 1) whether a complete set of fusion and targeting proteins is present in the endothelium; 2) whether they are organized in multimolecular complexes as in neurons; and 3) whether the endothelial multimolecular complexes differ from their neuronal counterparts because of their specialized role in transcytosis. To generate the complexes, we used myc-NSF, cultured pulmonary endothelial cells, and rat lung cytosol and membrane preparations; to detect them, we applied coimmunoprecipitation with myc antibodies; and to characterize them, we used velocity sedimentation and cross-linking procedures. We found that both the cytosolic and membrane fractions contain complexes that comprise, in addition to soluble NSF attachment proteins (SNAPs) and SNAREs (SNAP receptors), rab 5, dynamin, caveolin, and lipids. By immunogold labeling and negative staining we detected myc-NSF, syntaxin, dynamin, caveolin, and endogenous NSF in these complexes. Similar complexes are formed by endogenous NSF. The results indicate that complexes with a distinct protein–lipid composition exist and suggest that they participate in the targeting, fusion, and fission of caveolae with the endothelial plasmalemma.
Abstract:
Recombinant pox viruses have been generated for vaccination against heterologous pathogens. Notable examples include the following. (i) The Copenhagen strain of vaccinia virus has been engineered to express the rabies virus glycoprotein. When applied in baits, this recombinant has been shown to vaccinate the red fox in Europe and raccoons in the United States, stemming the spread of rabies virus infection in the wild. (ii) A fowlpox-based recombinant expressing the Newcastle disease virus fusion and hemagglutinin glycoproteins has been shown to protect commercial broiler chickens for their lifetime when the vaccine was administered at 1 day of age, even in the presence of maternal immunity against either the Newcastle disease virus or the pox vector. (iii) Recombinants of canarypox virus, whose replication is restricted to avian species, have provided protection against rabies virus challenge in cats and dogs and against canine distemper virus, feline leukemia virus, and equine influenza virus disease. In humans, canarypox virus-based recombinants expressing antigens from rabies virus, Japanese encephalitis virus, and HIV have been shown to be safe and immunogenic. (iv) A highly attenuated vaccinia derivative, NYVAC, has been engineered to express antigens from both animal and human pathogens. NYVAC-based recombinants expressing the rabies virus glycoprotein, a polyprotein from Japanese encephalitis virus, and seven antigens from Plasmodium falciparum have proved safe and immunogenic in early human vaccine studies.
Abstract:
Telomeres are specialized structures located at the ends of linear eukaryotic chromosomes that ensure their complete replication and protect them from fusion and degradation. We report here the characterization of the telomeres of the nematode Caenorhabditis elegans. We show that the chromosomes terminate in 4–9 kb of tandem repeats of the sequence TTAGGC. Furthermore, we have isolated clones corresponding to 11 of the 12 C. elegans telomeres. Their subtelomeric sequences are all different from each other, demonstrating that the terminal TTAGGC repeats are sufficient for general chromosomal capping functions. Finally, we demonstrate that the me8 meiotic mutant, which is defective in X chromosome crossing over and segregation, bears a terminal deficiency that was healed by the addition of telomeric repeats, presumably through the activity of a telomerase enzyme. The 11 cloned telomeres represent an important advance toward the completion of the physical map and the determination of the entire sequence of the C. elegans genome.
Abstract:
gp330/megalin, a member of the low density lipoprotein (LDL) receptor gene family, is expressed on the apical surfaces of epithelial tissues, including the neuroepithelium, where it mediates the endocytic uptake of diverse macromolecules, such as cholesterol-carrying lipoproteins, proteases, and antiproteinases. Megalin knockout mice manifest abnormalities in epithelial tissues that normally express the protein, including lung and kidney, and they die perinatally from respiratory insufficiency. In brain, impaired proliferation of the neuroepithelium produces a holoprosencephalic syndrome characterized by lack of olfactory bulbs, forebrain fusion, and a common ventricular system. Similar syndromes in humans and animals are caused by an insufficient supply of cholesterol during development. Because megalin can bind lipoproteins, we propose that the receptor is part of the maternal-fetal lipoprotein transport system and mediates the endocytic uptake of essential nutrients at the postgastrulation stage.