932 results for ES-SAGD. Heavy oil. Recovery factor. Reservoir modeling and simulation
Abstract:
Heavy-mineral analyses were made for 39 samples, 27 from DSDP Site 445 and 12 from Site 446. About one-fourth of the samples were so loose that they were easily disaggregated in water. The amount of heavy residue and the magnetite content of the heavy fraction were very high, 0.2 to 44 per cent and (on the average) more than 20 per cent, respectively. Among the non-opaque heavy minerals, common hornblende (0 to 80%) and augite (0 to 98%) are most abundant. Pale-green and bluish-green amphiboles (around 10%) and the epidote group (a few to 48%) are next in abundance. Euhedral apatite and biotite and irregularly shaped chromite are not abundant, but are present throughout the sequence. Hacksaw structure is developed in pale-green amphibole and augite. At Site 445, a fair amount of chlorite and a few glauconite(?) grains are present from Core 445-81 downward. The content of common hornblende and opaque minerals also changes from Core 445-81 downward. A geological boundary may exist between Cores 445-77 and 445-81. Source rocks of the sediments at both sites were basaltic volcanic rocks (possibly alkali suite), schists, and ultramafic rocks. The degree of lithification and amount of heavy residue, and the content of magnetite, non-opaque heavy minerals (excluding mafic minerals), and mafic minerals in the cores were compared with Eocene, Oligocene, and Miocene sandstones of southwest Japan. In many respects, the sediments at Sites 445 and 446 are quite different from those of southwest Japan. From the early Eocene to the early Miocene, the area of these sites belonged to a different geologic province than southwest Japan.
Abstract:
The consideration of real operating conditions in the design and optimization of a multijunction solar cell receiver-concentrator assembly is indispensable. Such a requirement calls for suitable modeling and simulation tools to complement the experimental work and circumvent its well-known burdens and restrictions. Three-dimensional distributed models have been demonstrated in the past to be a powerful choice for the analysis of distributed phenomena in single- and dual-junction solar cells, as well as for the design of strategies to minimize solar cell losses when operating under high concentrations. In this paper, we present the application of these models to the analysis of triple-junction solar cells under real operating conditions. The impact of different chromatic aberration profiles on the short-circuit current of triple-junction solar cells is analyzed in detail using the developed distributed model. Current spreading determines how strongly a given chromatic aberration profile affects the solar cell I-V curve. The focus is on determining the role of current spreading in the relationship between the photocurrent profile, subcell voltages and currents, and the sheet resistance of the semiconductor layers.
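The current-limiting behavior underlying this analysis can be made concrete with a zero-dimensional toy calculation: in a series-connected triple junction, the stack's short-circuit current is set by the weakest subcell, so any aberration that reshapes the spectrum reaching one junction shifts the current balance. The Python sketch below is illustrative only; the band edges, Gaussian flux profile, and EQE value are hypothetical placeholders, and a 0-D model deliberately ignores the current-spreading and sheet-resistance effects that the paper's three-dimensional distributed model captures.

    import numpy as np

    q = 1.602e-19  # elementary charge [C]

    def subcell_photocurrent(wl_nm, photon_flux, lo_nm, hi_nm, eqe=0.9):
        """Photocurrent density [A/m^2] collected between two band edges."""
        band = (wl_nm >= lo_nm) & (wl_nm < hi_nm)
        return q * eqe * np.trapz(photon_flux[band], wl_nm[band])

    wl = np.linspace(300, 1800, 1501)                  # wavelength grid [nm]
    flux = 4e18 * np.exp(-((wl - 800) / 400) ** 2)     # toy "aberrated" photon flux [m^-2 s^-1 nm^-1]

    # Hypothetical absorption ranges for GaInP / GaAs / Ge subcells.
    j_top = subcell_photocurrent(wl, flux, 300, 660)
    j_mid = subcell_photocurrent(wl, flux, 660, 890)
    j_bot = subcell_photocurrent(wl, flux, 890, 1800)

    # Series connection: short-circuit current limited by the weakest subcell.
    j_sc = min(j_top, j_mid, j_bot)
    print(f"J_top={j_top:.1f}, J_mid={j_mid:.1f}, J_bot={j_bot:.1f}, J_sc={j_sc:.1f} A/m^2")

Reshaping the flux profile (for example, narrowing the Gaussian to mimic stronger chromatic aberration) starves one subcell and drags down J_sc, which is the effect the distributed model resolves spatially.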
Abstract:
Nanoinformatics has recently emerged to address the need for computing applications at the nano level. In this regard, the authors have participated in various initiatives to identify its concepts, foundations and challenges. While nanomaterials open up the possibility of developing new devices in many industrial and scientific areas, they also offer breakthrough perspectives for the prevention, diagnosis and treatment of diseases. In this paper, we analyze the different aspects of nanoinformatics and suggest five research topics to help catalyze new research and development in the area, particularly focused on nanomedicine. We also discuss the use of informatics to further the biological and clinical applications of basic research in nanoscience and nanotechnology, and the related concept of an extended “nanotype” to coalesce information related to nanoparticles. We suggest how nanoinformatics could accelerate developments in nanomedicine, much as happened with the Human Genome and other -omics projects, on issues such as exchanging modeling and simulation methods and tools, linking toxicity information to clinical and personal databases, or developing new approaches for scientific ontologies, among many others.
Abstract:
Network mobility (NEMO) has been proposed to support mobility management when a group of users moves as a whole. In the IP Multimedia Subsystem (IMS), individual Quality of Service (QoS) control for NEMO results in excessive signaling cost. On the other hand, current QoS schemes have two drawbacks: unawareness of the heterogeneous wireless environment and inefficient utilization of the reserved bandwidth. To solve these problems, we present a novel heterogeneous bandwidth sharing (HBS) scheme for QoS provision under IMS-based NEMO (IMS-NEMO). The HBS scheme selects the most suitable access network for each session and enables newly arriving non-real-time sessions to share bandwidth with Variable Bit Rate (VBR) coded media flows. The modeling and simulation results demonstrate that HBS can satisfy users' QoS requirements and make more efficient use of the scarce wireless bandwidth.
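The abstract does not publish the HBS algorithm itself; the toy Python sketch below only illustrates the two ideas it names: per-session access-network selection and letting non-real-time traffic borrow the slack between what VBR flows reserve (their peak rate) and what they actually use (their mean rate). All class names, numbers, and the headroom heuristic are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class AccessNetwork:
        name: str
        capacity_kbps: float     # total link capacity
        reserved_kbps: float     # bandwidth reserved for admitted sessions
        vbr_peak_kbps: float     # peak rate reserved by VBR flows
        vbr_mean_kbps: float     # mean rate the VBR flows actually use

        def shareable_kbps(self) -> float:
            # Headroom = unreserved capacity + slack between VBR peak and mean.
            unreserved = self.capacity_kbps - self.reserved_kbps
            vbr_slack = self.vbr_peak_kbps - self.vbr_mean_kbps
            return unreserved + vbr_slack

    def admit_non_realtime(networks, demand_kbps):
        """Admit a non-real-time session on the network with the most headroom."""
        best = max(networks, key=lambda n: n.shareable_kbps())
        if best.shareable_kbps() >= demand_kbps:
            return best.name
        return None  # block the session: no network can absorb the demand

    nets = [
        AccessNetwork("WLAN", 54000, 40000, 8000, 5000),
        AccessNetwork("UMTS", 2000, 1500, 600, 400),
    ]
    print(admit_non_realtime(nets, demand_kbps=3000))  # -> "WLAN"

A real scheme would also police the borrowed bandwidth so that VBR flows can reclaim their reserved peak when their traffic bursts; the sketch omits that preemption logic.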
Abstract:
Over a decade ago, nanotechnologists began research on applications of nanomaterials for medicine. This research has revealed a wide range of different challenges, as well as many opportunities. Some of these challenges are strongly related to informatics issues, dealing, for instance, with the management and integration of heterogeneous information, defining nomenclatures, taxonomies and classifications for various types of nanomaterials, and research on new modeling and simulation techniques for nanoparticles. Nanoinformatics has recently emerged in the USA and Europe to address these issues. In this paper, we present a review of nanoinformatics, describing its origins, the problems it addresses, areas of interest, and examples of current research initiatives and informatics resources. We suggest that nanoinformatics could accelerate research and development in nanomedicine, as has occurred in the past in other fields. For instance, biomedical informatics served as a fundamental catalyst for the Human Genome Project, and other genomic and -omics projects, as well as the translational efforts that link resulting molecular-level research to clinical problems and findings.
Abstract:
Nanotechnology represents an area of particular promise and significant opportunity across multiple scientific disciplines. Ongoing nanotechnology research ranges from the characterization of nanoparticles and nanomaterials to the analysis and processing of experimental data seeking correlations between nanoparticles and their functionalities and side effects. Due to their special properties, nanoparticles are suitable for cellular-level diagnostics and therapy, offering numerous applications in medicine, e.g. development of biomedical devices, tissue repair, drug delivery systems and biosensors. In nanomedicine, recent studies are producing large amounts of structural and property data, highlighting the role for computational approaches in information management. While in vitro and in vivo assays are expensive, the cost of computing is falling. Furthermore, improvements in the accuracy of computational methods (e.g. data mining, knowledge discovery, modeling and simulation) have enabled effective tools to automate the extraction, management and storage of these vast data volumes. Since this information is widely distributed, one major issue is how to locate and access data where it resides (which also poses data-sharing limitations). The novel discipline of nanoinformatics addresses the information challenges related to nanotechnology research. In this paper, we summarize the needs and challenges in the field and present an overview of extant initiatives and efforts.
Abstract:
Mechanical degradation of tungsten alloys at extreme temperatures in vacuum and oxidation atmospheres.
Design and Simulation of Deep Nanometer SRAM Cells under Energy, Mismatch, and Radiation Constraints
Abstract:
Reliability is becoming the main concern in integrated circuits as technology scales below 22 nm. Small imperfections in device manufacturing now result in significant random differences in the electrical characteristics of the devices, which must be accounted for during design. The new processes and materials required to fabricate such extremely small devices are giving rise to new effects that ultimately result in increased static power consumption or higher vulnerability to radiation. SRAMs have become the most vulnerable part of electronic systems: not only do they account for more than half of the chip area of today's SoCs and microprocessors, but process variations affect them critically, since the failure of a single cell makes the whole memory fail. This thesis addresses the challenges that SRAM design faces in the smallest technology nodes. In a common scenario of increasing variability, issues such as energy consumption, technology-aware design, and radiation hardening are considered.
First, given the increasing magnitude of device variability in the smallest nodes, as well as the new sources of variability that appear as new devices are introduced and dimensions shrink, accurate modeling of that variability is crucial. We propose to extend the injectors method, which models variability at circuit level while abstracting its physical sources, with two new injectors that model the sub-threshold slope and drain-induced barrier lowering (DIBL), both of growing importance in FinFET technology. The two new injectors increase the accuracy of figures of merit at different abstraction levels of electronic design: transistor, gate, and circuit. The mean square error when estimating performance and stability metrics of SRAM cells is reduced by a factor of at least 1.5 and up to 7.5, while the estimation of the failure probability improves by several orders of magnitude. Low-power design is a major constraint given the fast-growing market of battery-powered mobile devices. It is equally relevant because of the high power densities of today's systems, in order to reduce thermal dissipation and its impact on aging. The traditional approach of reducing the supply voltage to lower energy consumption is challenging for SRAMs, given the increased impact of process variations at low supply voltages. We propose a cell design that uses a negative bit-line write assist to overcome write failures as the main supply voltage is lowered. Despite using a second power source for the negative bit-line voltage, the proposed design achieves an energy reduction of up to 20% compared to a conventional cell. A new metric, the hold trip point, is introduced to guard against the new failure modes that negative bit-line voltages create, together with an alternative method to estimate cell read speed that requires fewer simulations. As device scaling continues, new mechanisms are introduced to ease the fabrication process or to meet the performance targets of each successive node. One example is the compressive or tensile strain applied to the fins in FinFET technology, which alters the mobility of the transistors built on those fins. The effects of these mechanisms are strongly layout-dependent: each transistor is affected by its neighbors, and different types of transistors are affected in different ways. We propose a complementary SRAM cell that uses pMOS pass-gates, thereby shortening the fins of the nMOS devices and yielding long uncut fins for the pMOS devices that extend into neighboring cells, up to the limits of the array. Once shallow trench isolation (STI) and SiGe stressors are considered, the proposed design improves both transistor types, boosting the performance of the complementary SRAM cell by more than 10% for the same failure probability and static power consumption, with no area overhead. Finally, while radiation has been a traditional concern in space electronics, the small currents and voltages of the latest nodes are making circuits vulnerable to radiation-induced transient noise even at ground level. Although SOI and FinFET technologies reduce the amount of energy a striking particle transfers to the circuit, the large process variations of the smallest nodes will affect their radiation immunity. We demonstrate that process variations can increase the radiation-induced error rate by up to 40% in the 7 nm node compared to the nominal case. This increase is larger than the improvement achieved by specifically radiation-hardened memory cells, suggesting that reducing variability would bring a greater benefit.
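The failure-probability estimation this abstract refers to is typically done by Monte Carlo sampling of per-transistor variability sources. The Python sketch below is illustrative only: a real injector-based flow injects the sampled offsets into a SPICE netlist and simulates the cell, whereas here a closed-form toy "margin" stands in for the simulator, and every number (sigmas, nominal margin, array size) is a placeholder rather than data from the thesis.

    import numpy as np

    rng = np.random.default_rng(0)
    N_MC = 100_000

    # Two per-transistor variability sources for a 6T cell: threshold-voltage
    # and sub-threshold-slope offsets, the kind of circuit-level sources the
    # injector method models while abstracting their physical origin.
    dvth = rng.normal(0.0, 0.030, size=(N_MC, 6))   # [V], placeholder sigma
    dss  = rng.normal(0.0, 5.0,   size=(N_MC, 6))   # [mV/dec], placeholder sigma

    # Toy stability metric: mismatch between the two cell halves erodes a
    # nominal 180 mV margin; sub-threshold-slope spread adds a small penalty.
    mismatch = np.abs(dvth[:, :3].sum(axis=1) - dvth[:, 3:].sum(axis=1))
    penalty  = 1e-3 * np.abs(dss).mean(axis=1)
    margin   = 0.180 - mismatch - penalty

    p_fail = np.mean(margin <= 0.0)
    print(f"per-cell failure probability ~ {p_fail:.2e}")

    # One failing cell fails the whole array (no redundancy assumed):
    n_cells = 1 << 20                     # 1 Mb array, placeholder
    print(f"1 Mb array yield ~ {(1.0 - p_fail) ** n_cells:.3e}")
    # Note: plain Monte Carlo cannot resolve the ~1e-9 per-cell failure rates
    # real arrays require; that tail-estimation problem is exactly where more
    # accurate variability models pay off.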
Abstract:
The Integrated Safety Analysis (ISA) methodology, developed in the Modeling and Simulation (MOSI) branch of the Consejo de Seguridad Nuclear (CSN), is being assessed and analyzed through several applications promoted by the CSN. Integrated safety analysis combines evolved versions of the two customary safety-analysis techniques, deterministic and probabilistic, and is considered a suitable tool for supporting Risk-Informed Regulation (RIR), the current approach to nuclear safety being developed and applied worldwide. Within this policy framework sit the Safety Margin Action Plan (SMAP) and Safety Margin Assessment Application (SM2A) projects, set up by the Committee on the Safety of Nuclear Installations (CSNI) of the OECD Nuclear Energy Agency (NEA) to develop a suitable approach for using integrated methodologies to assess the change in safety margins caused by changes in nuclear plant conditions. The committee provides a forum for the exchange of technical information and for cooperation among member organizations, which contribute their own ideas in research, development, and engineering. The CSN's proposal is the application of the ISA methodology, which fits especially well with the approach developed in the SMAP project: obtaining best-estimate-plus-uncertainty values of the safety variables, comparing them with the safety limits, and thereby obtaining the frequency with which those limits are exceeded. The advantage of ISA is that it allows a selective, discrete analysis of the ranges of the uncertain parameters with the greatest influence on the limit exceedance frequency, making it possible to evaluate changes produced by variations in plant design or operation that would be imperceptible, or difficult to quantify, with other methodologies. ISA belongs to the family of discrete dynamic PSA methodologies based on the generation of dynamic event trees (DETs), and it rests on the Theory of Stimulated Dynamics (TSD), a simplified dynamic reliability theory that allows the risk of each sequence to be quantified. With ISA, all the relevant interactions in a plant are modeled and simulated: design, operating conditions, maintenance, operator actions, stochastic events, and so on. It therefore requires the integration of codes for thermal-hydraulic simulation and operating procedures, event-tree delineation, fault-tree and event-tree quantification, uncertainty treatment, and risk integration. This thesis applies the ISA methodology to the integrated analysis of the initiating event of loss of the component cooling water system (CCWS), which generates sequences of loss of reactor coolant through the seals of the reactor coolant pumps (SLOCA). It is used to test the change in margins, with respect to the peak clad temperature limit (1477 K), that would result from a potential 10% power uprate in the pressurized water reactor of Zion NPP.
The work done for this thesis, the fruit of the collaboration between the School of Mining and Energy Engineering, the technology company Ekergy Software S.L. (NFQ Solutions), and the MOSI branch of the CSN, has been the basis for the CSN's contribution to the SM2A exercise. That exercise served as an evaluation of the development of some of the ideas, suggestions, and algorithms behind the ISA methodology. As a result, a slight increase in the damage exceedance frequency (DEF) caused by the power uprate was obtained. This result demonstrates the viability of the ISA methodology for measuring variations in safety margins caused by plant modifications, and shows that it is especially suitable for scenarios where stochastic events or operator recovery and mitigation actions can play a relevant role in the risk. The results have no validity beyond demonstrating the viability of the ISA methodology: the plant studied has been shut down and the information on its safety analyses is deficient, so unverified assumptions and approximations based on generic studies or on other plants were necessary. Three phases were established in the analysis process: first, obtaining the reference dynamic event tree; second, uncertainty analysis and obtaining the damage domains; and third, risk quantification. Several applications of the methodology, and its advantages over classical PSA, have been shown, and the work has also contributed to the development of the prototype tool for applying the ISA methodology (SCAIS).
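The third phase, risk quantification, reduces to a simple composition once the event tree and damage domains exist: the DEF is the initiating-event frequency times the probability-weighted chance, over the tree's sequences, of exceeding the safety limit. The Python sketch below is a minimal, hypothetical illustration of that step; the sequence probabilities, temperature distributions, and initiating-event frequency are placeholders, not results from the thesis, and only the 1477 K limit comes from the abstract.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical sequences of a dynamic event tree: each carries a branch
    # probability and an uncertain peak clad temperature (PCT), here modeled
    # as a normal distribution (mean_K, sigma_K) for illustration only.
    sequences = [
        {"p_branch": 0.90, "pct": (900.0, 60.0)},    # successful mitigation
        {"p_branch": 0.08, "pct": (1300.0, 120.0)},  # delayed operator action
        {"p_branch": 0.02, "pct": (1550.0, 100.0)},  # recovery fails
    ]

    PCT_LIMIT_K = 1477.0      # safety limit cited in the abstract
    F_INIT = 1e-3             # initiating-event frequency [1/yr], placeholder
    N = 200_000               # Monte Carlo samples per sequence

    def exceedance_probability(mean, sigma):
        """P(PCT > limit) for one sequence, sampling its uncertain parameters."""
        samples = rng.normal(mean, sigma, N)
        return np.mean(samples > PCT_LIMIT_K)

    # DEF = f_IE * sum over sequences of P(sequence) * P(damage | sequence)
    def_value = F_INIT * sum(
        s["p_branch"] * exceedance_probability(*s["pct"]) for s in sequences
    )
    print(f"damage exceedance frequency ~ {def_value:.2e} per year")

Comparing this figure before and after a plant modification (such as the 10% uprate) gives the change in margin the SMAP approach looks for; the ISA machinery exists to generate the tree, the uncertain parameter ranges, and the damage domains that this toy takes as given.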
Abstract:
A genetic locus suppressing DNA underreplication in the intercalary heterochromatin (IH) and pericentric heterochromatin (PH) of the polytene chromosomes of Drosophila melanogaster salivary glands has been described. Found in the In(1)scV2 strain, the mutation, designated Su(UR)ES, was located on chromosome 3L at position 34.8 and cytologically mapped to region 68A3-B4. A cytological phenotype was observed in the salivary gland chromosomes of larvae homozygous and hemizygous for Su(UR)ES: (i) in the IH regions, which normally are incompletely polytenized and therefore often break to form “weak points,” underreplication is suppressed, and breaks and ectopic contacts disappear; (ii) the degree of polytenization in PH grows higher, which is why the regions at the bases of the chromosome arms, normally β-heterochromatic, acquire a distinct banding pattern, i.e., become euchromatic by morphological criteria; (iii) an additional bulk of polytenized material arises between the arms of chromosome 3 to form a fragment with a typical banding pattern. Chromosome 2 PH reveals additional α-heterochromatin. Su(UR)ES does not affect the viability, fertility, or morphological characters of the imago, shows semidominant expression in the heterozygote, and has a distinct maternal effect. The results obtained provide evidence that the processes leading to DNA underreplication in IH and PH are governed by the same genetic mechanism.
Abstract:
Keratinocyte growth factor (KGF) is a member of the fibroblast growth factor family. Portions of the gene encoding KGF were amplified during primate evolution and are present in multiple nonprocessed copies in the human genome. Nucleotide analysis of a representative sampling of these KGF-like sequences indicated that they were at least 95% identical to corresponding regions of the KGF gene. To localize these sequences to specific chromosomal sites in human and higher primates, we used fluorescence in situ hybridization. In human, using a cosmid probe encoding KGF exon 1, we assigned the location of the KGF gene to chromosome 15q15–21.1. In addition, copies of KGF-like sequences hybridizing only with a cosmid probe encoding exons 2 and 3 were localized to dispersed sites on chromosomes 2q21, 9p11, 9q12–13, 18p11, 18q11, 21q11, and 21q21.1. The distribution of KGF-like sequences suggests a role for alphoid DNA in their amplification and dispersion. In chimpanzee, KGF-like sequences were observed at five chromosomal sites, which were each homologous to sites in human, while in gorilla, a subset of four of these homologous sites was identified; in orangutan two sites were identified, while gibbon exhibited only a single site. The chromosomal localization of KGF sequences in human and great ape genomes indicates that amplification and dispersion occurred in multiple discrete steps, with initial KGF gene duplication and dispersion taking place in gibbon and involving loci corresponding to human chromosomes 15 and 21. These findings support the concept of a closer evolutionary relationship of human and chimpanzee and a possible selective pressure for such dispersion during the evolution of higher primates.
Abstract:
The vascular endothelial growth factor (VEGF) has been shown to be a significant mediator of angiogenesis during a variety of normal and pathological processes, including tumor development. Human U87MG glioblastoma cells express the three VEGF isoforms: VEGF121, VEGF165, and VEGF189. Here, we have investigated whether these three isoforms have distinct roles in glioblastoma angiogenesis. Clones that overexpressed each isoform were derived and inoculated into mouse brains. Mice that received VEGF121- and VEGF165-overexpressing cells developed intracerebral hemorrhages after 60–90 hr. In contrast, mice implanted with VEGF189-overexpressing cells had only slightly larger tumors than those caused by parental cells and little evidence of hemorrhage at these early times after implantation, whereas, after longer periods of growth, enhanced angiogenicity and tumorigenicity were apparent. There was rapid blood vessel growth and breakdown around the tumors caused by cells overexpressing VEGF121 and VEGF165, whereas there was similar vascularization but no eruption in the vicinity of those tumors caused by cells overexpressing VEGF189, and none on the border of the tumors caused by the parental cells. Thus, by introducing VEGF-overexpressing glioblastoma cells into the brain, we have established a reproducible and predictable in vivo model of tumor-associated intracerebral hemorrhage caused by the enhanced expression of single molecular species. Such a model should be useful for uncovering the role of VEGF isoforms in the mechanisms of angiogenesis and for investigating intracerebral hemorrhage due to ischemic stroke or congenital malformations.
Abstract:
Vascular endothelial growth factor (VEGF) is a homodimeric member of the cystine knot family of growth factors, with limited sequence homology to platelet-derived growth factor (PDGF) and transforming growth factor β2 (TGF-β). We have determined its crystal structure at a resolution of 2.5 Å, and identified its kinase domain receptor (KDR) binding site using mutational analysis. Overall, the VEGF monomer resembles that of PDGF, but its N-terminal segment is helical rather than extended. The dimerization mode of VEGF is similar to that of PDGF and very different from that of TGF-β. Mutational analysis of VEGF reveals that symmetrical binding sites for KDR are located at each pole of the VEGF homodimer. Each site contains two functional “hot spots” composed of binding determinants presented across the subunit interface. The two most important determinants are located within the largest hot spot on a short, three-stranded sheet that is conserved in PDGF and TGF-β. Functional analysis of the binding epitopes for two receptor-blocking antibodies reveal different binding determinants near each of the KDR binding hot spots.
Abstract:
Brefeldin A (BFA) inhibited the exchange of ADP ribosylation factor (ARF)-bound GDP for GTP by a Golgi-associated guanine nucleotide-exchange protein (GEP) [Helms, J. B. & Rothman, J. E. (1992) Nature (London) 360, 352–354; Donaldson, J. G., Finazzi, D. & Klausner, R. D. (1992) Nature (London) 360, 350–352]. Cytosolic ARF GEP was also inhibited by BFA, but after purification from bovine brain and rat spleen, it was no longer BFA-sensitive [Tsai, S.-C., Adamik, R., Moss, J. & Vaughan, M. (1996) Proc. Natl. Acad. Sci. USA 93, 305–309]. We describe here purification from bovine brain cytosol of a BFA-inhibited GEP. After chromatography on DEAE–Sephacel, hydroxylapatite, and Mono Q and precipitation at pH 5.8, GEP was eluted from Superose 6 as a large molecular weight complex at the position of thyroglobulin (≈670 kDa). After SDS/PAGE of samples from column fractions, silver-stained protein bands of ≈190 and 200 kDa correlated with activity. BFA-inhibited GEP activity of the 200-kDa protein was demonstrated following electroelution from the gel and renaturation by dialysis. Four tryptic peptides from the 200-kDa protein had amino acid sequences that were 47% identical to sequences in Sec7 from Saccharomyces cerevisiae (total of 51 amino acids), consistent with the view that the BFA-sensitive 200-kDa protein may be a mammalian counterpart of Sec7 that plays a similar role in cellular vesicular transport and Sec7 may be a GEP for one or more yeast ARFs.
Abstract:
Accumulating evidence suggests that more than 20 neuron-specific genes are regulated by a transcriptional cis-regulatory element known as the neural restrictive silencer (NRS). A trans-acting repressor that binds the NRS, NRSF [also designated RE1-silencing transcription factor (REST)], has been cloned, but the mechanism by which it represses transcription is unknown. Here we show evidence that NRSF represses transcription of its target genes by recruiting mSin3 and histone deacetylase. Transfection experiments using a series of NRSF deletion constructs revealed the presence of two repression domains, RD-1 and RD-2, within the N- and C-terminal regions, respectively. A yeast two-hybrid screen using the RD-1 region as bait identified a short form of mSin3B. In vitro pull-down assays and in vivo immunoprecipitation-Western analyses revealed a specific interaction between NRSF-RD1 and the mSin3 PAH1-PAH2 domains. Furthermore, NRSF and mSin3 formed a complex with histone deacetylase 1, suggesting that NRSF-mediated repression involves histone deacetylation. When histone deacetylation was inhibited by trichostatin A in non-neuronal cells, mRNAs encoding several neuron-specific genes such as SCG10, NMDAR1, and choline acetyltransferase became detectable. These results indicate that NRSF recruits mSin3 and histone deacetylase 1 to silence neural-specific genes, and suggest further that inhibition of histone deacetylation is crucial for transcriptional activation of neural-specific genes during terminal neuronal differentiation.