926 results for Business Value Two-Layer Model
Abstract:
Intermediate band formation in silicon layers for solar cell applications was achieved by titanium implantation and laser annealing, producing a two-layer heterogeneous system formed by the implanted layer and the un-implanted substrate. In this work we present, for the first time, electrical characterization results showing that recombination is suppressed when the Ti concentration is high enough to exceed the Mott limit, in agreement with intermediate band theory. Clear differences are observed between samples implanted with doses below and above the Mott limit. Samples implanted below the Mott limit have capacitance values much lower than the un-implanted ones, as expected for a highly doped semiconductor Schottky junction. When the Mott limit is surpassed, however, the samples show much higher capacitance, revealing that the intermediate band has formed. The capacitance increase is due to the large amount of charge trapped in the intermediate band, even at low temperatures. Ti deep levels were measured by admittance spectroscopy; they lie at energies ranging from 0.20 to 0.28 eV below the conduction band for implantation doses in the range 10^13-10^14 at./cm^2. For doses above the Mott limit, the implanted atoms become non-recombinant. Capacitance-voltage transient measurements prove that the fabricated devices consist of two layers, in which the implanted layer and the substrate behave as an n+/n junction.
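The Mott limit invoked above is commonly expressed by the Mott criterion n_c^(1/3) * a_H ≈ 0.26, where a_H is the effective Bohr radius of the impurity state. A minimal sketch of the resulting critical concentration; the Bohr radius used here is an illustrative placeholder, not a value taken from this work:

```python
def mott_critical_density(bohr_radius_cm):
    """Mott criterion n_c**(1/3) * a_H ~ 0.26: critical impurity density
    (cm^-3) above which localized impurity states merge into a band."""
    return (0.26 / bohr_radius_cm) ** 3

# Illustrative effective Bohr radius of 1 nm (1e-7 cm), a placeholder value:
# the criterion then puts the critical density near 1.8e19 cm^-3.
n_c = mott_critical_density(1e-7)
```

For implanted layers, comparing the Ti concentration (dose divided by layer thickness) against such a critical density is what distinguishes the below- and above-Mott regimes discussed in the abstract.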
Abstract:
The EHEA proposes a student-centered teaching model, so it seems necessary to actively involve students in the teaching-learning process. Increasing the active participation of students is not always easy in mathematical topics since, when students first enter university, their ability to carry out autonomous mathematical work is limited. In this paper we present some experiences related to the use of Computer Algebra Systems (CAS). All the experiences are designed to develop mathematical competencies, mainly self-learning, the use of technology, and teamwork. They include several teachers' proposals: small projects to be carried out in small groups, participation in competitions, the design of different CAS toolboxes, etc. The results obtained in the experiences, carried out with different groups of students from different engineering programs at different universities, make us cautiously optimistic about the educational value of the model.
Data acquisition system for a real-time reverse-thrust noise detection application
Abstract:
Among all noise sources, the activation of reverse thrust to slow an aircraft after landing is regarded by airport authorities as a major cause of noise pollution, annoyance, and complaints in the communities surrounding airports. Many airports around the world have therefore restricted the use of reverse thrust, especially at night. One way to reduce the acoustic impact of airport operations is to deploy effective tools for reverse-thrust noise detection. This final-year project develops a software system capable of detecting, in real time, whether an aircraft landing on the runway activates reverse thrust, applying the TREND (Thrust Reverser Noise Detection) methodology. The application is designed around a two-module software model:
• The acoustic signal acquisition module simulates an audio capture system. It obtains stereo samples from ".WAV" audio files or from the capture system (a microphone array located near the landing runway), conditions the audio samples, and sends them to the next module.
• The processing module searches the acoustic samples received from the acquisition module for detection events, applying the TREND methodology. TREND describes the search for two sound events, event 1 (EV1) and event 2 (EV2). The first is triggered when an aircraft lands, discriminating against other sound events such as take-offs and background noise; the second occurs after event 1 only when the aircraft uses reverse thrust to brake.
To detect event 1, signals unrelated to the landing are first removed by filtering the captured signal; a sound-pressure-level threshold detector is then applied, and finally the direction of the sound source relative to the capture system is determined. Event 2 detection is based on thresholds applied to the temporal evolution of the acoustic power level using the inverse propagation model, together with an estimate of the aircraft's distance at each time step as it travels down the runway. Each detected landing is recorded to a file stored in a dedicated folder, and all acquired data are logged by the application in a text file.
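The event 1 stage described above relies on a sound-pressure-level threshold detector. A minimal sketch of such a detector; the frame layout, the 80 dB threshold, and the minimum event length are illustrative choices, not the project's actual parameters:

```python
import math

def frame_spl(samples, p_ref=20e-6):
    """Sound pressure level (dB) of one frame of pressure samples (Pa),
    relative to the standard 20 micropascal reference."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-12) / p_ref)

def detect_events(frames, threshold_db=80.0, min_frames=3):
    """Return (start, end) frame-index pairs for runs of at least
    min_frames consecutive frames above the SPL threshold."""
    events, start = [], None
    for i, frame in enumerate(frames):
        if frame_spl(frame) >= threshold_db:
            if start is None:
                start = i
        elif start is not None:
            if i - start >= min_frames:
                events.append((start, i))
            start = None
    if start is not None and len(frames) - start >= min_frames:
        events.append((start, len(frames)))
    return events
```

In the real system the frames would come from the conditioned microphone-array samples, and a run of loud frames would mark a candidate EV1 to be checked against the source-direction criterion.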
Abstract:
Recent technological developments allow observational oceanography to transition from a ship-based to a networked concept, in which the most efficient and effective way to observe the ocean is through a fleet of spatially distributed autonomous platforms complemented by remote sensing. Due to their maneuverability, autonomy, and endurance at sea, underwater gliders already play a significant role in this networked observational approach. Underwater gliders were specifically designed to sample vast areas of the ocean. These torpedo-shaped robots use their hydrodynamic shape, wings, and buoyancy changes to induce horizontal and vertical motion through the water column. A sensor measuring conductivity, temperature, and depth (CTD) is a standard payload on this platform, since certain ocean dynamic variables can be derived from temperature, depth, and salinity, the last of which can be inferred from measurements of temperature and conductivity. Integrating CTD sensors in glider platforms is not without challenges. One of them concerns the accuracy of the salinity values derived from the sampled conductivity and temperature. Specifically, salinity estimates are significantly degraded by the thermal lag between the measured temperature and the real temperature inside the conductivity cell of the sensor. This deficiency depends on the particulars of the inflow to the sensor and on its geometry; it has also been hypothesized to depend on the heat accumulated in the sensor's coating layers. The effects of thermal lag are usually mitigated by controlling the inflow conditions through the sensor, generally by pumping water through it or by keeping its diving speed constant and known. Although pumping systems have recently been incorporated into CTD sensors on board gliders, there are still platforms with unpumped CTDs. In that case, salinity estimation relies on assuming reasonably controlled and unperturbed flow conditions at the CTD sensor. This Thesis investigates the impact, if any, that glider hydrodynamics may have on the performance of onboard CTDs. Specifically, the location of the CTD sensor (external to the hull) relative to the boundary layer developed along the glider fuselage is investigated first. This is done initially by applying a coupled inviscid/boundary-layer model implemented by the author, and later with commercial computational fluid dynamics (CFD) software. In both cases the results indicate that the CTD sensor lies outside the boundary layer, so its inflow conditions are those of the free stream. Even so, the inflow speed at the CTD sensor is the speed of the platform, which depends on its hydrodynamics. For this reason, the research was extended to investigate the effect of platform speed on the performance of the CTD sensor. A finite element model of the hydrodynamic and thermal behavior of the flow inside the CTD sensor was developed for this purpose. The numerical results suggest that the thermal lag, originally attributed to heat accumulation in the sensor structure, is mostly due to the interaction of the flow through the conductivity cell with the cell's internal geometry, and that this interaction differs at different glider speeds. Specifically, at low glider speeds (0.2 m/s), the mixing of incoming water with the water remaining inside the conductivity cell is slowed by the generation of coherent eddy structures, and significant departures between real and estimated salinity are found. At higher glider speeds (0.4 m/s), mixing is enhanced by turbulence and instabilities; as a result, the thermal response of the CTD sensor is faster and the salinity estimates are more accurate than in the low-speed case. For completeness, the numerical results were validated against model tests: a scaled model of the CTD sensor was built to obtain experimental confirmation of the numerical findings. Exploiting the similarity principle governing incompressible flows, the experiments were carried out with air, which greatly simplifies the experimental setup and makes it feasible with limited resources. The model tests qualitatively confirm the numerical findings. Moreover, this Thesis suggests that the response of the CTD sensor would be significantly improved by adding small turbulators at appropriate locations inside the conductivity cell.
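Outside of full CFD, the thermal lag at issue is often approximated by a first-order response of the in-cell temperature to the ambient temperature. A minimal sketch under that simplification; the time constants below are illustrative stand-ins for the speed-dependent behavior, not values from the Thesis:

```python
def cell_temperature(ambient, dt, tau):
    """First-order lag model: the temperature inside the conductivity cell
    relaxes toward ambient with time constant tau (requires dt < tau)."""
    cell = [ambient[0]]
    alpha = dt / tau
    for temp in ambient[1:]:
        cell.append(cell[-1] + alpha * (temp - cell[-1]))
    return cell

# A 2 degC step in ambient temperature: a slow cell (large tau, poor in-cell
# mixing) lags the step far more than a fast cell (small tau, good mixing).
step = [10.0] + [12.0] * 50
slow = cell_temperature(step, dt=1.0, tau=20.0)
fast = cell_temperature(step, dt=1.0, tau=2.0)
```

A salinity estimate computed from the lagged in-cell temperature inherits this tracking error, which is the degradation the abstract describes for unpumped CTDs.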
Abstract:
This project is part of a line of work whose ultimate goal is to optimize the energy consumed by a handheld multimedia device through feedback control techniques, by dynamically modifying the processor's operating frequency and supply voltage. The frequency and voltage are adjusted based on feedback about the device's power consumption. This poses a problem, because it is usually not possible to monitor power consumption directly on this kind of device, which is why a power consumption estimate, obtained from a prediction model, is used instead. From the number of times certain events occur in the device's processor, the prediction model can estimate the power the device consumes. The work carried out in this project focuses on implementing a power estimation model in the Linux kernel. The estimation is implemented in the operating system first of all to gain direct access to the processor's performance counters, and secondly to facilitate the frequency and voltage changes once the power estimate is obtained, since those changes are also made from the operating system. A further reason is that the estimation must be independent of user applications. Moreover, the estimation must run periodically, which would be difficult to achieve outside the operating system; periodicity is essential because the intended frequency and voltage modification is dynamic, so the device's power consumption must be known at all times. Note also that the control algorithms must be designed around a periodic actuation pattern. The power estimation model is specific to the consumption profile generated by a single application, in this case a video decoder. Nevertheless, it must work as accurately as possible for each of the processor's operating frequencies and for as many video sequences as possible, because the successive power estimates are meant to drive the dynamic frequency modification, so the model must keep producing estimates regardless of the frequency at which the device is running. To assess the precision of the estimation model, the power consumed by the device at the different operating frequencies is measured while the video decoder runs. These measurements are compared with the power estimates obtained during the same runs, yielding the model's prediction error and guiding the appropriate adjustments to the model.
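A counter-based power model of the kind described is typically a linear combination of event counts, P ≈ w0 + Σ wi·ei, with the weights fitted offline against measured power. A minimal sketch of such a fit via the normal equations; the counter values and weights below are synthetic, not the project's actual events or coefficients:

```python
def fit_linear_model(X, y):
    """Ordinary least squares for P ~ w0 + sum_i w_i * e_i via the normal
    equations (A^T A) w = A^T y, solved by Gauss-Jordan elimination."""
    rows = [[1.0] + list(x) for x in X]   # prepend 1 for the intercept w0
    n = len(rows[0])
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    aty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(n)]
    m = [ata[i] + [aty[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))  # partial pivot
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col and m[col][col] != 0.0:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def predict(w, counters):
    """Estimated power (W) for one sample of performance-counter values."""
    return w[0] + sum(wi * ci for wi, ci in zip(w[1:], counters))
```

In the kernel, `predict` would be evaluated periodically on the freshly read counter values, and its output fed to the frequency/voltage controller.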
Abstract:
Although humanity depends on the continued, aggregate functioning of natural ecosystems, few studies have explored the impact of community structure on the stability of aggregate community properties. Here we derive the stability of the aggregate property of community biomass as a function of species’ competition coefficients for a two-species model. The model predicts that the stability of community biomass is relatively independent of the magnitude of the interaction strengths. Instead, the degree of asymmetry of the interactions appears to be key to community stability.
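For concreteness, the quantities involved can be illustrated with a two-species competition model of the standard Lotka-Volterra form; this is a generic sketch of the usual ingredients (coexistence equilibrium plus Jacobian eigenvalues), not the authors' specific biomass-stability derivation, and the coefficient values are arbitrary:

```python
import math

def equilibrium(a12, a21):
    """Coexistence equilibrium of the nondimensional competition model
    dN1/dt = r1*N1*(1 - N1 - a12*N2), dN2/dt = r2*N2*(1 - N2 - a21*N1)."""
    d = 1.0 - a12 * a21
    return (1.0 - a12) / d, (1.0 - a21) / d

def return_rate(a12, a21, r1=1.0, r2=1.0):
    """Recovery rate after a small perturbation: -Re(dominant eigenvalue)
    of the 2x2 Jacobian evaluated at the coexistence equilibrium."""
    n1, n2 = equilibrium(a12, a21)
    a, b = -r1 * n1, -r1 * n1 * a12   # Jacobian rows at equilibrium
    c, d = -r2 * n2 * a21, -r2 * n2
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4.0 * det
    lam = (tr + math.sqrt(disc)) / 2.0 if disc >= 0 else tr / 2.0
    return -lam
```

Community biomass at equilibrium is simply the sum of the two equilibrium densities, and comparing `return_rate` across symmetric versus asymmetric (a12, a21) pairs is one way to probe the kind of dependence the abstract reports.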
Abstract:
An essential component of regulated steroidogenesis is the translocation of cholesterol from the cytoplasm to the inner mitochondrial membrane where the cholesterol side-chain cleavage enzyme carries out the first committed step in steroidogenesis. Recent studies showed that a 30-kDa mitochondrial phosphoprotein, designated steroidogenic acute regulatory protein (StAR), is essential for this translocation. To allow us to explore the roles of StAR in a system amenable to experimental manipulation and to develop an animal model for the human disorder lipoid congenital adrenal hyperplasia (lipoid CAH), we used targeted gene disruption to produce StAR knockout mice. These StAR knockout mice were indistinguishable initially from wild-type littermates, except that males and females had female external genitalia. After birth, they failed to grow normally and died from adrenocortical insufficiency. Hormone assays confirmed severe defects in adrenal steroids—with loss of negative feedback regulation at hypothalamic–pituitary levels—whereas hormones constituting the gonadal axis did not differ significantly from levels in wild-type littermates. Histologically, the adrenal cortex of StAR knockout mice contained florid lipid deposits, with lesser deposits in the steroidogenic compartment of the testis and none in the ovary. The sex-specific differences in gonadal involvement support a two-stage model of the pathogenesis of StAR deficiency, with trophic hormone stimulation inducing progressive accumulation of lipids within the steroidogenic cells and ultimately causing their death. These StAR knockout mice provide a useful model system in which to determine the mechanisms of StAR’s essential roles in adrenocortical and gonadal steroidogenesis.
Abstract:
Threshold mechanisms of transcriptional activation are thought to be critical for translating continuous gradients of extracellular signals into discrete all-or-none cellular responses, such as mitogenesis and differentiation. Indeed, unequivocal evidence for a graded transcriptional response in which the concentration of inducer directly correlates with the level of gene expression in individual eukaryotic cells is lacking. By using a novel binary tetracycline regulatable retroviral vector system, we observed a graded rather than a threshold mechanism of transcriptional activation in two different model systems. When polyclonal populations of cells were analyzed at the single cell level, a dose-dependent, stepwise increase in expression of the reporter gene, green fluorescent protein (GFP), was observed by fluorescence-activated cell sorting. These data provide evidence that, in addition to the generally observed all-or-none switch, the basal transcription machinery also can respond proportionally to changes in concentration of extracellular inducers and transcriptional activators.
Abstract:
Recent advances in single molecule manipulation methods offer a novel approach to investigating the protein folding problem. These studies usually are done on molecules that are naturally organized as linear arrays of globular domains. To extend these techniques to study proteins that normally exist as monomers, we have developed a method of synthesizing polymers of protein molecules in the solid state. By introducing cysteines at locations where bacteriophage T4 lysozyme molecules contact each other in a crystal and taking advantage of the alignment provided by the lattice, we have obtained polymers of defined polarity up to 25 molecules long that retain enzymatic activity. These polymers then were manipulated mechanically by using a modified scanning force microscope to characterize the force-induced reversible unfolding of the individual lysozyme molecules. This approach should be general and adaptable to many other proteins with known crystal structures. For T4 lysozyme, the force required to unfold the monomers was 64 ± 16 pN at the pulling speed used. Refolding occurred within 1 sec of relaxation with an efficiency close to 100%. Analysis of the force versus extension curves suggests that the mechanical unfolding transition follows a two-state model. The unfolding forces determined in 1 M guanidine hydrochloride indicate that in these conditions the activation barrier for unfolding is reduced by 2 kcal/mol.
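The 2 kcal/mol barrier reduction reported above translates, via a simple Arrhenius estimate, into roughly a thirty-fold acceleration of unfolding at room temperature. A quick check; this two-state Arrhenius treatment is the standard textbook estimate, not a calculation from the paper:

```python
import math

R_KCAL = 1.987e-3  # gas constant in kcal/(mol*K)

def rate_enhancement(delta_barrier_kcal, temperature_k=298.0):
    """Factor by which a two-state unfolding rate increases when the
    activation barrier drops by delta_barrier_kcal (Arrhenius estimate)."""
    return math.exp(delta_barrier_kcal / (R_KCAL * temperature_k))
```

At 298 K, `rate_enhancement(2.0)` comes out near 29, i.e. about a thirty-fold faster unfolding in 1 M guanidine hydrochloride under this estimate.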
Abstract:
Mg-chelation is found to be a prerequisite for directing protoporphyrin IX into the chlorophyll (Chl)-synthesizing branch of the tetrapyrrole pathway. The ATP-dependent insertion of magnesium into protoporphyrin IX is catalyzed by the enzyme Mg-chelatase, which consists of three protein subunits (CHL D, CHL I, and CHL H). We chose the Mg-chelatase from tobacco to learn more about the mode of molecular action of this complex enzyme by elucidating the in vitro and in vivo interactions between the central subunit CHL D and subunits CHL I and CHL H. We dissected CHL D into defined peptide fragments and assayed for the part of CHL D essential for protein-protein interaction and enzyme activity. Surprisingly, only a small part of CHL D, i.e., 110 aa, was required for interaction with the partner subunits and maintenance of enzyme activity. In addition, CHL D proved capable of forming homodimers, and it interacted with both CHL I and CHL H. Our data led to the outline of a two-step model based on the cooperation of the subunits in the chelation process.
Abstract:
Postmortem prefrontal cortices (PFC) (Brodmann’s areas 10 and 46), temporal cortices (Brodmann’s area 22), hippocampi, caudate nuclei, and cerebella of schizophrenia patients and their matched nonpsychiatric subjects were compared for reelin (RELN) mRNA and reelin (RELN) protein content. In all of the brain areas studied, RELN and its mRNA were significantly reduced (≈50%) in patients with schizophrenia; this decrease was similar in patients affected by undifferentiated or paranoid schizophrenia. To exclude possible artifacts caused by postmortem mRNA degradation, we measured the mRNAs in the same PFC extracts from γ-aminobutyric acid (GABA)A receptors α1 and α5 and nicotinic acetylcholine receptor α7 subunits. Whereas the expression of the α7 nicotinic acetylcholine receptor subunit was normal, that of the α1 and α5 receptor subunits of GABAA was increased when schizophrenia was present. RELN mRNA was preferentially expressed in GABAergic interneurons of PFC, temporal cortex, hippocampus, and glutamatergic granule cells of cerebellum. A protein putatively functioning as an intracellular target for the signal-transduction cascade triggered by RELN protein released into the extracellular matrix is termed mouse disabled-1 (DAB1) and is expressed at comparable levels in the neuroplasm of the PFC and hippocampal pyramidal neurons, cerebellar Purkinje neurons of schizophrenia patients, and nonpsychiatric subjects; these three types of neurons do not express RELN protein. In the same samples of temporal cortex, we found a decrease in RELN protein of ≈50% but no changes in DAB1 protein expression. We also observed a large (up to 70%) decrease of GAD67 but only a small decrease of GAD65 protein content. These findings are interpreted within a neurodevelopmental/vulnerability “two-hit” model for the etiology of schizophrenia.
Abstract:
Mucopolysaccharidosis type VII (MPS VII; Sly syndrome) is an autosomal recessive lysosomal storage disorder due to an inherited deficiency of β-glucuronidase. A naturally occurring mouse model for this disease was discovered at The Jackson Laboratory and shown to be due to homozygosity for a 1-bp deletion in exon 10 of the gus gene. The murine model MPS VII (gusmps/mps) has been very well characterized and used extensively to evaluate experimental strategies for lysosomal storage diseases, including bone marrow transplantation, enzyme replacement therapy, and gene therapy. To enhance the value of this model for enzyme and gene therapy, we produced a transgenic mouse expressing the human β-glucuronidase cDNA with an amino acid substitution at the active site nucleophile (E540A) and bred it onto the MPS VII (gusmps/mps) background. We demonstrate here that the mutant mice bearing the active site mutant human transgene retain the clinical, morphological, biochemical, and histopathological characteristics of the original MPS VII (gusmps/mps) mouse. However, they are now tolerant to immune challenge with human β-glucuronidase. This “tolerant MPS VII mouse model” should be useful for preclinical trials evaluating the effectiveness of enzyme and/or gene therapy with the human gene products likely to be administered to human patients with MPS VII.
Abstract:
We have used 19F NMR to analyze the metal ion-induced folding of the hammerhead ribozyme by selective incorporation of 5-fluorouridine. We have studied the chemical shift and linewidths of 19F resonances of 5-fluorouridine at positions 4 and 7 in the ribozyme core as a function of added Mg2+. The data fit well to a simple two-state model whereby the formation of domain 1 is induced by the noncooperative binding of Mg2+ with an association constant in the range of 100 to 500 M−1, depending on the concentration of monovalent ions present. The results are in excellent agreement with data reporting on changes in the global shape of the ribozyme. However, the NMR experiments exploit reporters located in the center of the RNA sections undergoing the folding transitions, thereby allowing the assignment of specific nucleotides to the separate stages. The results define the folding pathway at high resolution and provide a time scale for the first transition in the millisecond range.
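The two-state, noncooperative Mg2+ binding model described above corresponds to a standard single-site binding isotherm. A minimal Python sketch, assuming that simple isotherm and using the reported range of association constants (the function name is illustrative, not from the paper):

```python
def fraction_folded(k_assoc, mg_conc):
    """Fraction of ribozyme molecules in the folded state for a
    two-state model driven by noncooperative Mg2+ binding.

    k_assoc : association constant in M^-1 (reported range: 100-500 M^-1)
    mg_conc : free Mg2+ concentration in M
    """
    occupancy = k_assoc * mg_conc
    return occupancy / (1.0 + occupancy)

# The transition midpoint sits at [Mg2+] = 1/Ka, where half the
# molecules are folded; a higher Ka shifts it to lower Mg2+.
for k_assoc in (100.0, 500.0):
    midpoint = 1.0 / k_assoc
    print(f"Ka = {k_assoc:.0f} M^-1 -> midpoint [Mg2+] = "
          f"{midpoint * 1e3:.0f} mM, f = {fraction_folded(k_assoc, midpoint):.2f}")
```

With Ka between 100 and 500 M−1, the folding midpoint falls in the low-millimolar Mg2+ range, consistent with the titrations described in the abstract.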
Abstract:
Intramolecular chain diffusion is an elementary process in the conformational fluctuations of the DNA hairpin-loop. We have studied the temperature and viscosity dependence of a model DNA hairpin-loop by FRET (fluorescence resonance energy transfer) fluctuation spectroscopy (FRETfs). Apparent thermodynamic parameters were obtained by analyzing the correlation amplitude through a two-state model and are consistent with steady-state fluorescence measurements. The kinetics of closing the loop show non-Arrhenius behavior, in agreement with theoretical prediction and other experimental measurements on peptide folding. The fluctuation rates show a fractional power dependence (β = 0.83) on the solution viscosity. A much slower intrachain diffusion coefficient in comparison to that of polypeptides was derived based on the first passage time theory of SSS [Szabo, A., Schulten, K. & Schulten, Z. (1980) J. Chem. Phys. 72, 4350–4357], suggesting that intrachain interactions, especially stacking interaction in the loop, might increase the roughness of the free energy surface of the DNA hairpin-loop.
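The fractional power-law dependence of the fluctuation rates on viscosity (k ∝ η^(−β) with β = 0.83) means that doubling the solvent viscosity slows the dynamics by less than the factor of two expected for pure Kramers-like (β = 1) behavior. A minimal sketch of this scaling (the helper name is illustrative):

```python
def relative_rate(viscosity_ratio, beta=0.83):
    """Relative fluctuation rate k(eta)/k(eta0) under a fractional
    power-law viscosity dependence, k proportional to eta**(-beta)."""
    return viscosity_ratio ** (-beta)

# Doubling the viscosity reduces the rate to ~56% of its original
# value at beta = 0.83, versus exactly 50% at beta = 1.
print(relative_rate(2.0))            # beta = 0.83
print(relative_rate(2.0, beta=1.0))  # Kramers limit, for comparison
```

The gap between the two curves grows with the viscosity ratio, which is why a fractional exponent is detectable over the viscosity range accessible in solution experiments.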
Abstract:
To quantitatively investigate the trafficking of the transmembrane lectin VIP36 and its relation to cargo-containing transport carriers (TCs), we analyzed a C-terminal fluorescent-protein (FP) fusion, VIP36-SP-FP. When expressed at moderate levels, VIP36-SP-FP localized to the endoplasmic reticulum, Golgi apparatus, and intermediate transport structures, and colocalized with epitope-tagged VIP36. Temperature shift and pharmacological experiments indicated VIP36-SP-FP recycled in the early secretory pathway, exhibiting trafficking representative of a class of transmembrane cargo receptors, including the closely related lectin ERGIC53. VIP36-SP-FP trafficking structures comprised tubules and globular elements, which translocated in a saltatory manner. Simultaneous visualization of anterograde secretory cargo and VIP36-SP-FP indicated that the globular structures were pre-Golgi carriers, and that VIP36-SP-FP segregated from cargo within the Golgi and was not included in post-Golgi TCs. Organelle-specific bleach experiments directly measured the exchange of VIP36-SP-FP between the Golgi and endoplasmic reticulum (ER). Fitting a two-compartment model to the recovery data predicted first order rate constants of 1.22 ± 0.44%/min for ER → Golgi, and 7.68 ± 1.94%/min for Golgi → ER transport, revealing a half-time of 113 ± 70 min for leaving the ER and 1.67 ± 0.45 min for leaving the Golgi, and accounting for the measured steady-state distribution of VIP36-SP-FP (13% Golgi/87% ER). Perturbing transport with AlF4− treatment altered VIP36-SP-GFP distribution and changed the rate constants. The parameters of the model suggest that relatively small differences in the first order rate constants, perhaps manifested in subtle differences in the tendency to enter distinct TCs, result in large differences in the steady-state localization of secretory components.
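The steady-state distribution quoted above follows directly from the two first-order rate constants: in a closed two-compartment system the ER→Golgi and Golgi→ER fluxes balance at steady state, so the Golgi fraction is k_in/(k_in + k_out). A minimal sketch using the reported rate constants (the function name is illustrative, not from the paper):

```python
def steady_state_fractions(k_er_to_golgi, k_golgi_to_er):
    """Steady-state (Golgi, ER) fractions for a closed two-compartment
    model with first-order exchange.

    Rates are in fraction of the source pool per minute. At steady
    state the fluxes balance: k_er_to_golgi * ER = k_golgi_to_er * Golgi.
    """
    golgi = k_er_to_golgi / (k_er_to_golgi + k_golgi_to_er)
    return golgi, 1.0 - golgi

# Reported rate constants: 1.22 %/min (ER -> Golgi), 7.68 %/min (Golgi -> ER).
golgi, er = steady_state_fractions(0.0122, 0.0768)
print(f"Golgi: {golgi:.1%}, ER: {er:.1%}")
```

This predicts roughly a 14%/86% Golgi/ER split, within rounding of the measured steady-state distribution of 13% Golgi / 87% ER, which is the consistency check the authors describe.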