879 results for Lead based paint
Abstract:
Experiences in decentralized rural electrification programmes using solar home systems have suffered difficulties during the operation and maintenance phase, in many cases because the maintenance cost is underestimated, owing to the decentralized character of the activity, and because the reliability of the solar home system components is frequently unknown. This paper reports on the reliability study and cost characterization carried out in a large photovoltaic rural electrification programme in Morocco. The paper aims to determine the reliability features of the solar systems, focusing on in-field testing of batteries and photovoltaic modules. The degradation rates for batteries and PV modules have been extracted from the in-field experiments. In addition, the main costs related to the operation and maintenance activity have been identified, with the aim of establishing the main factors that lead to the failure of quality sustainability in many rural electrification programmes.
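As an illustration of how a degradation rate like those mentioned above can be extracted from periodic in-field measurements, the following sketch fits a linear trend to module power data. The figures and the simple linear model are invented for illustration and are not taken from the programme described in the abstract.

```python
# Illustrative estimate of a PV module degradation rate from periodic
# in-field peak-power measurements (hypothetical data, not from the paper).
import numpy as np

years = np.array([0.0, 1.0, 2.0, 3.0, 4.0])           # time since installation
power_w = np.array([85.0, 84.2, 83.1, 82.3, 81.4])    # measured peak power (W)

# Linear least-squares fit: slope in W/year, normalized to %/year of initial power.
slope, intercept = np.polyfit(years, power_w, 1)
degradation_pct_per_year = -100.0 * slope / power_w[0]
print(f"degradation rate: {degradation_pct_per_year:.2f} %/year")
```

A real study would of course use many modules and account for measurement uncertainty and seasonal effects; the point here is only the slope-to-rate normalization.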
Abstract:
The objective of the present study is to develop fully renewable and environmentally benign techniques for improving the fire safety of flexible polyurethane foams (PUFs). A multilayered coating made from cationic chitosan (CS) and anionic alginate (AL) was deposited on PUFs through layer-by-layer assembly. This coating system has only a slight influence on the thermal stability of PUF, but significantly improves char formation during combustion. Cone calorimetry reveals that 10 CS-AL bilayers (only 5.7% of the foam's weight) lead to a 66% and 11% reduction in peak heat release rate and total heat release, respectively, compared with the uncoated control. The notably decreased fire hazards of PUF are attributed to the CS-AL coatings helping to form an insulating protective layer on the surface of the burning material that inhibits oxygen and heat permeation and slows the release of flammable gases into the vapor phase, thereby improving flame resistance. This water-based, environmentally benign natural coating should stimulate further efforts to improve the fire safety of a variety of polymer substrates.
Abstract:
Concentrating solar power ("energía termosolar de concentración" in Spanish) is a technology based on capturing the thermal power of solar radiation so as to reach temperatures capable of driving a conventional (or advanced) thermodynamic cycle; the future of this technology depends mainly on its ability to concentrate solar radiation efficiently and economically. This thesis addresses several important problems related to that objective. Reducing the cost of concentrating direct solar radiation, while still meeting the thermodynamic goal of heating a fluid to a given temperature, is of vital importance. Linear Fresnel collectors have been identified in the scientific literature as a technology with great potential to achieve this cost reduction. This technology was selected for a number of reasons, notably its great design freedom and its currently immature state. To respond to this challenge, a detailed study of the optical properties of linear Fresnel collectors has been carried out, combining analytical and numerical methods. First, models predicting the position and direct normal irradiance of the sun, together with analytical relations developed for this purpose, were used to study the effect of multiple design variables on the energy incident on the mirrors. Likewise, the errors due to off-axis aberration, to the aperture of the rays reflected by the mirrors, and to shading and blocking between mirrors were obtained analytically.
This made it possible to compare different mirror shapes (flat, circular or parabolic) and to carry out a preliminary design of the position and width of the mirrors and receiver without resorting to costly numerical methods. Second, a Monte Carlo ray-tracing model was developed to check the validity of the analytical study, and above all because the analytical approach is not accurate in modeling reflection at the mirrors. The code is designed specifically for linear Fresnel collectors, which reduces computing time by several orders of magnitude compared with a more general commercial program; this justified developing a new code rather than purchasing a license for another program. The model was first used to compare the flux intensity and efficiency of Fresnel collectors, with and without a secondary reflector, against parabolic trough collectors. Finally, the combined results of the analytical study and the numerical program were used to optimize the solar field for different orientations (North-South and East-West), different locations (Almería and Aswan), different tilts towards the Tropic (from 0 deg to 32 deg) and different minimum flux intensities at the center of the receiver (10 kW/m2 and 25 kW/m2). This thesis has led to important findings that should be considered when designing a Fresnel solar field. First, the mirrors should not be flat but cylindrical or parabolic, since curved mirrors yield higher concentrations and efficiencies. Furthermore, it is concluded that the East-West orientation is more suitable for high-latitude locations such as Almería, whereas in areas closer to the Tropics, such as Aswan, North-South fields lead to higher efficiencies.
Notably, the East-West orientation requires approximately half as many mirrors as North-South fields, these mirrors can be tilted towards the Tropics to improve efficiency, and such fields reach similar values of flux intensity at the receiver every day at midday. North-South oriented fields, however, provide a more constant flux over the course of a day. Finally, it has been shown that analytically pre-optimized designs, with mirror width and mirror spacing varying across the field, can increase the energy generated per square meter of mirror by up to 6%. The annual optical efficiency of parabolic trough collectors is 23% higher than that of Fresnel fields in Almería, while the difference is only 9% in Aswan. This implies that, to reach the same electricity price as the reference technology, the reduction in installation cost per square meter of mirror must be between 10% and 25%, and that linear Fresnel collectors are more likely to be deployed in low-latitude areas. As a consequence of the studies carried out in this thesis, a storage system has been patented that accounts for the variation of the receiver flux over the day, especially for East-West oriented fields. This invention would allow the incident energy to be exploited during a longer part of the year, appreciably increasing the optical and thermal efficiencies.

Abstract:
Concentrating solar power is the common name of a technology based on capturing the thermal power of solar radiation, in a suitable way to reach temperatures able to drive a conventional (or advanced) thermodynamic cycle to generate electricity; this quest mainly depends on our ability to concentrate solar radiation in a cheap and efficient way.
The present thesis focuses on highlighting and helping to solve some of the important issues related to this problem. The need to reduce the cost of concentrating direct solar radiation, without jeopardizing the thermodynamic objective of heating a fluid up to the required temperature, is of prime importance. Linear Fresnel collectors have been identified in the scientific literature as a technology with high potential to achieve this cost reduction. This technology has been selected for a number of reasons, particularly the degrees of freedom of this type of concentrating configuration and its currently immature state. In order to respond to this challenge, a very detailed study of the optical properties of linear Fresnel collectors has been carried out, combining analytic and numerical methods. First, the effect of the design variables on the fraction of energy impinging onto the reflecting surface has been studied using analytically developed equations, together with models that predict the location and direct normal irradiance of the sun at any moment. Similarly, errors due to off-axis aberration, to the aperture of the reflected energy beam and to shading and blocking effects have been obtained analytically. This has allowed the comparison of different mirror shapes (flat, cylindrical or parabolic), as well as a preliminary optimization of the location and width of mirrors and receiver with no need for time-consuming numerical models. Second, in order to verify the validity of the analytic results, but also because the reflection process cannot be modeled precisely enough with analytic equations, a Monte Carlo ray-trace model has been developed. The code is designed specifically for linear Fresnel collectors, which reduces the computing time by several orders of magnitude compared with more general commercial software; this justifies the development of the new code.
The model has first been used to compare radiation flux intensities and efficiencies of linear Fresnel collectors, for both multitube-receiver and secondary-reflector-receiver technologies, with parabolic trough collectors. Finally, the results obtained in the analytic study together with the numeric model have been used to optimize the solar field for different orientations (North-South and East-West), different locations (Almería and Aswan), different tilts of the field towards the Tropic (from 0 deg to 32 deg) and different minimum flux intensity requirements (10 kW/m2 and 25 kW/m2). This thesis work has led to several important findings that should be considered in the design of Fresnel solar fields. First, flat mirrors should not be used in any case, as cylindrical and parabolic mirrors lead to higher flux intensities and efficiencies. Second, it has been concluded that, in locations relatively far from the Tropics such as Almería, East-West embodiments are more efficient, while in Aswan a North-South orientation leads to a higher annual efficiency. It must be noted that East-West oriented solar fields require approximately half as many mirrors as North-South oriented fields, can be tilted towards the Equator to increase efficiency, and attain similar values of flux intensity at the receiver every day at midday. On the other hand, in North-South embodiments the flux intensity is more even over each single day. Finally, it has been shown that the use of analytic designs with variable shift between mirrors and variable mirror width across the field can improve the electricity generated per square meter of reflecting surface by up to 6%. The annual optical efficiency of parabolic troughs has been found to be 23% higher than that of Fresnel fields in Almería, but only around 9% higher in Aswan.
This implies that, in order to attain the same levelized cost of electricity as parabolic troughs, the required reduction of installation cost per square meter of mirror is in the range of 10-25%. It is also concluded that linear Fresnel collectors are more suitable for low-latitude areas. As a consequence of the studies carried out in this thesis, an innovative storage system has been patented. This system takes into account the variation of the flux intensity over the day, especially for East-West oriented solar fields. As a result, the invention would allow the impinging radiation to be exploited for a longer time every day, appreciably increasing the optical and thermal efficiencies.
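The kind of Monte Carlo ray trace described above can be illustrated with a deliberately simplified 2-D sketch for a single flat mirror and a linear receiver: sample rays across the mirror, perturb their direction by the solar disc aperture and a mirror slope error, and count the fraction intercepted by the receiver. All geometry and error values below are illustrative assumptions, not the thesis design.

```python
# Toy 2-D Monte Carlo ray trace for one flat Fresnel mirror aiming at a
# linear receiver; sun at zenith for simplicity. Illustrative values only.
import numpy as np

rng = np.random.default_rng(0)
n_rays = 100_000

mirror_half_width = 0.3        # m (assumed)
receiver_height = 4.0          # m above the mirror plane (assumed)
receiver_half_width = 0.04     # m (assumed)
beam_spread = 4.65e-3          # rad, half-angle of the solar disc
slope_error = 2.0e-3           # rad (std) of mirror surface error (assumed)

# Rays leave random points on the mirror aimed at the receiver center,
# perturbed by the solar-disc aperture and twice the mirror slope error.
x0 = rng.uniform(-mirror_half_width, mirror_half_width, n_rays)
aim = np.arctan2(-x0, receiver_height)          # angle toward receiver center
angle = (aim
         + rng.uniform(-beam_spread, beam_spread, n_rays)
         + rng.normal(0.0, 2.0 * slope_error, n_rays))
x_hit = x0 + receiver_height * np.tan(angle)

intercept = np.mean(np.abs(x_hit) <= receiver_half_width)
print(f"intercept factor: {intercept:.3f}")
```

A full model would add sun position, shading/blocking between mirrors and receiver optics; the specialized-code speedup mentioned in the abstract comes precisely from hard-coding this linear geometry.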
Abstract:
In this paper three p-adaptation strategies based on the minimization of the truncation error are presented for high order discontinuous Galerkin methods. The truncation error is approximated by means of a τ-estimation procedure and enables the identification of mesh regions that require adaptation. Three adaptation strategies are developed and termed a posteriori, quasi-a priori and quasi-a priori corrected. All strategies require fine solutions, which are obtained by enriching the polynomial order, but while the former needs time-converged solutions, the last two rely on non-converged solutions, which leads to faster computations. In addition, the high order method permits the spatial decoupling of the estimated errors and enables anisotropic p-adaptation. These strategies are verified and compared in terms of accuracy and computational cost for the Euler and the compressible Navier-Stokes equations. It is shown that the two quasi-a priori methods achieve a significant reduction in computational cost when compared to a uniform polynomial enrichment. Namely, for a viscous boundary layer flow, we obtain a speedup of 6.6 and 7.6 for the quasi-a priori and quasi-a priori corrected approaches, respectively.
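The a posteriori idea (estimate the truncation error of a low-order discretization by inserting an enriched, higher-order solution into the low-order operator) can be illustrated on a toy 1-D problem. Here periodic finite differences stand in for the discontinuous Galerkin discretization, and the problem, orders and grid are purely illustrative.

```python
# Toy sketch of tau-estimation: solve -u'' + u = f with a 2nd-order (coarse)
# and a 4th-order (enriched) periodic FD scheme, then estimate the coarse
# truncation error by applying the coarse operator to the enriched solution.
import numpy as np

n = 32
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = x[1] - x[0]
f = 2.0 * np.sin(x)            # exact solution of -u'' + u = f is u = sin(x)

def helmholtz_matrix(order):
    """Periodic FD matrix for -u'' + u at the given stencil order."""
    A = np.zeros((n, n))
    if order == 2:
        c = {-1: -1/h**2, 0: 2/h**2, 1: -1/h**2}
    else:  # 4th-order centered stencil
        c = {-2: 1/(12*h**2), -1: -16/(12*h**2), 0: 30/(12*h**2),
             1: -16/(12*h**2), 2: 1/(12*h**2)}
    for off, v in c.items():
        for i in range(n):
            A[i, (i + off) % n] += v
    return A + np.eye(n)

A2, A4 = helmholtz_matrix(2), helmholtz_matrix(4)
u2 = np.linalg.solve(A2, f)    # coarse (low-order) solution
u4 = np.linalg.solve(A4, f)    # enriched (high-order) solution

# True truncation error of the coarse scheme vs. its estimate obtained by
# inserting the enriched solution into the coarse operator.
tau_true = A2 @ np.sin(x) - f
tau_est = A2 @ u4 - f
print(np.max(np.abs(tau_true)), np.max(np.abs(tau_est)))
```

In an adaptive solver, element-wise magnitudes of such an estimate would drive where (and in which direction, for anisotropic p-adaptation) to raise the polynomial order.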
Abstract:
New cloud-oriented technologies, the Internet of Things and "as a service" trends rely on storing and processing data on remote servers. To guarantee the security of these data while they are communicated to, and handled by, the remote server, different cryptographic schemes are used. Traditionally, such cryptographic systems focus on encrypting the data while they do not need to be processed (that is, during communication and storage). However, once the encrypted data must be processed (on the remote server), they have to be decrypted, at which point an intruder on that server could access sensitive user data. Moreover, this traditional approach requires the server to be able to decrypt the data, so its integrity must be trusted not to compromise them. Fully homomorphic encryption schemes arise as a possible solution to these problems. A fully homomorphic scheme does not require decrypting the data in order to operate on them; instead, it performs the operations on the encrypted data, maintaining a homomorphism between the ciphertext and the plaintext. In this way, an intruder in the system could steal nothing but ciphertexts, making theft of the sensitive data impossible without theft of the encryption keys. However, homomorphic encryption schemes are currently drastically slow compared with classical encryption schemes: one operation in the plaintext ring can entail numerous operations in the ciphertext ring. For this reason, different approaches are emerging for accelerating these schemes for practical use.
One of the proposals for accelerating homomorphic schemes is the use of High-Performance Computing (HPC) on FPGAs (Field Programmable Gate Arrays). An FPGA is a semiconductor device containing logic blocks whose interconnection and functionality can be reprogrammed. Compiling for an FPGA generates a hardware circuit specific to the given algorithm, instead of relying on instructions executed on a universal machine, which is a great advantage over CPUs. FPGAs therefore differ clearly from CPUs: a pipelined architecture allows successive outputs to be obtained in constant time, and multiple pipes enable concurrent/parallel computation. Thus, in this project: different implementations of homomorphic schemes on FPGA-based systems are carried out; the advantages and disadvantages of cryptographic schemes on FPGA-based systems are analyzed and studied, with comparisons to related projects; and the implementations are compared with related work.

New cloud-based technologies, the Internet of Things and "as a service" trends are based on data storage and processing on a remote server. In order to guarantee secure communication and handling of data, cryptographic schemes are used. Traditionally, these cryptographic schemes focus on guaranteeing the security of data while storing and transferring it, not while operating on it. Therefore, once the server has to operate on that encrypted data, it first decrypts it, exposing unencrypted data to intruders on the server. Moreover, the whole traditional scheme is based on the assumption that the server is reliable, giving it enough credentials to decipher the data in order to process it. As a possible solution to these issues, fully homomorphic encryption (FHE) schemes are introduced.
A fully homomorphic scheme does not require data decryption to operate; rather, it operates over the ciphertext ring, keeping a homomorphism between the ciphertext ring and the plaintext ring. As a result, an outsider could only obtain encrypted data, making it impossible to retrieve the actual sensitive data without the associated cipher keys. However, using homomorphic encryption (HE) schemes impacts performance drastically, slowing it down: one operation in the plaintext space can lead to several operations in the ciphertext space. Because of this, different approaches address the problem of speeding up these schemes so that they become practical. One of these approaches consists in the use of High-Performance Computing (HPC) using FPGAs (Field Programmable Gate Arrays). An FPGA is an integrated circuit designed to be configured by a customer or a designer after manufacturing, hence "field-programmable". Compiling into an FPGA means generating a circuit (hardware) specific to that algorithm, instead of having a universal machine and generating a set of machine instructions. FPGAs thus have clear differences compared with CPUs: a pipeline architecture, which allows obtaining successive outputs in constant time, and the possibility of having multiple pipes for concurrent/parallel computation. Thereby, in this project: we present different implementations of FHE schemes on FPGA-based systems, and we analyse and study the advantages and drawbacks of the implemented FHE schemes, compared to related work.
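The homomorphism between plaintext and ciphertext operations can be illustrated with a toy example. Textbook RSA is multiplicatively homomorphic; it is not an FHE scheme like those implemented in the project, and at this key size it is utterly insecure, but it shows the principle of a server operating on ciphertexts without ever decrypting.

```python
# Toy demonstration of homomorphic operation on ciphertexts: textbook RSA
# with tiny parameters (insecure; illustration only, not an FHE scheme).
p, q = 61, 53
n = p * q                      # modulus: 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent via modular inverse

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

m1, m2 = 7, 12
c1, c2 = enc(m1), enc(m2)

# The "server" multiplies ciphertexts; decryption yields the product of the
# plaintexts, which were never exposed: (m1^e * m2^e) mod n = (m1*m2)^e mod n.
c_prod = (c1 * c2) % n
assert dec(c_prod) == (m1 * m2) % n
print(dec(c_prod))
```

A fully homomorphic scheme extends this idea to both addition and multiplication on ciphertexts, which is what makes arbitrary computation on encrypted data possible, at the performance cost the abstract describes.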
Abstract:
The purpose of this thesis is to present a methodology for small-signal dynamic and performance analysis of dc distributed power systems built from commercial modules. To this end, a simple method is used that states the least conservative stability margins as a single number. This index is calculated at each interface of the system and can be used to obtain a global index indicating the stability of the overall system, making it possible to compare distributed power systems in terms of robustness. Interconnecting dc-dc converters with one another and with the necessary EMI filters can give rise to undesired interactions that degrade converter performance and make the system more prone to instability. This change in behavior is due to interactions between the impedances of the various elements of the system. In most cases, distributed power systems are built from commercial modules whose internal structure is unknown. The analyses presented in this thesis are therefore based on frequency-response measurements of the converter that can be made from its input and output terminals. Using the measured input and output impedances of the system elements, a sensitivity function can be constructed that provides the stability margins of the different interfaces. This thesis uses the concept of the maximum value of the sensitivity function (maximum peak criterion, MPC) to state the stability margins as a single number. Once the stability of all system interfaces has been evaluated individually, the resulting indices can be combined into a single number with which to compare the stability of different systems.
Likewise, the possible interactions at the input and output of the dc-dc converters have been analyzed, obtaining analytical expressions that describe in detail the couplings generated in the system. The analytical studies have been validated experimentally throughout the thesis. The analysis presented in this thesis culminates in an index that condenses the least conservative stability margins. It is also shown that the robustness of the system is ensured if the impedances used in the sensitivity function are obtained precisely at the input or output of the subsystem being analyzed. Furthermore, the thesis presents a set of impedance-like internal parameters, together with their analytical expressions, that allow a detailed explanation of the interactions in the system. These analytical expressions can be obtained either from the analytic transfer functions if the internal structure is known, or from frequency-response measurements, or by identification from the time-domain response of the converter. With the methodologies presented in this thesis, the stability and performance of systems consisting essentially of dc-dc converters and filters of unknown internal structure can be predicted. The prediction is based on an index that condenses the stability-margin information and yields an indicator of the overall stability of the whole system, allowing the stability of different distributed power architectures to be compared.

ABSTRACT
The purpose of this thesis is to present a dynamic small-signal stability and performance analysis methodology for dc-distributed systems consisting of commercial power modules. Furthermore, the objective is to introduce a simple method to state the least conservative margins for robust stability as a single number.
In addition, an index characterizing the overall system stability is obtained, based on which different dc-distributed systems can be compared in terms of robustness. The interconnected systems are prone to impedance-based interactions which might lead to transient-performance degradation or even instability. These systems are typically constructed using commercial converters with unknown internal structure. Therefore, the analysis presented throughout this thesis is based on frequency responses measurable from the input and output terminals. The stability margins are stated utilizing the concept of maximum peak criteria, derived from the behavior of the impedance-based sensitivity function, which provides a single number to state robust stability. Using this concept, the stability information at every system interface is combined into a meaningful number to state the average robustness of the system. In addition, theoretical formulas are derived to assess source and load side interactions in order to describe detailed couplings within the system. The presented theoretical analysis methodologies are experimentally validated throughout the thesis. In this thesis, according to the presented analysis, the least conservative stability margins are provided as a single number guaranteeing robustness. It is also shown that within the interconnected system robust stability is ensured only if the impedance-based minor-loop gain is determined at the very input or output of each subsystem. Moreover, a complete set of impedance-type internal parameters, as well as the formulas with which the interaction sensitivity can be fully explained and analyzed, is provided. The given formulation can be utilized equally well with measured frequency responses, time-domain identified internal parameters, or extracted analytic transfer functions.
Based on the analysis methodologies presented in this thesis, the stability and performance of interconnected systems consisting of converters with unknown internal structure can be predicted. Moreover, the provided concept for assessing the least conservative stability margins makes it possible to obtain an index stating the overall robust stability of a distributed power architecture and thus to compare different systems in terms of stability.
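The maximum-peak-criterion idea can be sketched numerically: from the source output impedance and the load input impedance one forms the minor-loop gain, builds the impedance-based sensitivity function, and reads off its peak as the single robustness number, from which guaranteed gain and phase margins follow. The impedance models below are illustrative placeholders (an R-L-C output stage and a constant-power load approximated as a negative incremental resistance), not measured converters.

```python
# Minimal sketch of the maximum peak criterion (MPC) from terminal impedances.
import numpy as np

f = np.logspace(1, 5, 2000)            # 10 Hz .. 100 kHz
w = 2j * np.pi * f

# Hypothetical source output impedance: series R-L in parallel with C.
R, L, C = 0.05, 10e-6, 100e-6
Z_rl = R + w * L
Zo = Z_rl / (1.0 + Z_rl * w * C)

# Hypothetical load: constant-power load as negative incremental resistance.
Zi = -10.0

S = 1.0 / (1.0 + Zo / Zi)              # impedance-based sensitivity function
mpc = float(np.max(np.abs(S)))         # maximum peak criterion: one number

# Classic guideline: the sensitivity peak bounds the gain and phase margins.
gm_db = 20 * np.log10(mpc / (mpc - 1)) if mpc > 1 else np.inf
pm_deg = np.degrees(2 * np.arcsin(1.0 / (2.0 * mpc)))
print(f"MPC = {mpc:.2f}, GM >= {gm_db:.1f} dB, PM >= {pm_deg:.1f} deg")
```

Repeating this at every interface and combining the per-interface MPC values into one figure is the thesis-level step; the sketch shows only the single-interface computation.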
Abstract:
Serotonin N-acetyltransferase is the enzyme responsible for the diurnal rhythm of melatonin production in the pineal gland of animals and humans. Inhibitors of this enzyme that are active in cell culture have not been reported previously. The compound N-bromoacetyltryptamine was shown to be a potent inhibitor of this enzyme in vitro and in a pineal cell culture assay (IC50 ≈ 500 nM). The mechanism of inhibition is suggested to involve a serotonin N-acetyltransferase-catalyzed alkylation reaction between N-bromoacetyltryptamine and reduced CoA, resulting in the production of a tight-binding bisubstrate analog inhibitor. This alkyltransferase activity is apparently catalyzed at a site functionally distinct from the acetyltransferase active site on serotonin N-acetyltransferase. Such active-site plasticity is suggested to result from a subtle conformational alteration in the protein. This plasticity allows for an unusual form of mechanism-based inhibition with multiple turnovers, resulting in “molecular fratricide.” N-bromoacetyltryptamine should serve as a useful tool for dissecting the role of melatonin in circadian rhythm as well as a potential lead compound for therapeutic use in mood and sleep disorders.
Abstract:
In mammals the retina contains photoactive molecules responsible for both vision and circadian photoresponse systems. Opsins, which are located in rods and cones, are the pigments for vision, but it is not known whether they play a role in circadian regulation. A subset of retinal ganglion cells with direct projections to the suprachiasmatic nucleus (SCN) are at the origin of the retinohypothalamic tract that transmits the light signal to the master circadian clock in the SCN. However, the ganglion cells are not known to contain rhodopsin or other opsins that may function as photoreceptors. We have found that the two blue-light photoreceptors, cryptochromes 1 and 2 (CRY1 and CRY2), recently discovered in mammals are specifically expressed in the ganglion cell and inner nuclear layers of the mouse retina. In addition, CRY1 is expressed at a high level in the SCN and oscillates in this tissue in a circadian manner. These data, in conjunction with the established role of CRY2 in photoperiodism in plants, lead us to propose that mammals have a vitamin A-based photopigment (opsin) for vision and a vitamin B2-based pigment (cryptochrome) for entrainment of the circadian clock.
Abstract:
Mouse mast cells express gp49B1, a cell-surface member of the Ig superfamily encoded by the gp49B gene. We now report that by ALIGN comparison of the amino acid sequence of gp49B1 with numerous receptors of the Ig superfamily, a newly recognized family has been established that includes gp49B1, the human myeloid cell Fc receptor for IgA, the bovine myeloid cell Fc receptor for IgG2, and the human killer cell inhibitory receptors expressed on natural killer cells and T lymphocyte subsets. Furthermore, the cytoplasmic domain of gp49B1 contains two immunoreceptor tyrosine-based inhibition motifs that are also present in killer cell inhibitory receptors; these motifs downregulate natural killer cell and T-cell activation signals that lead to cytotoxic activity. As assessed by flow cytometry with transfectants that express either gp49B1 or gp49A, which are 89% identical in the amino acid sequences of their extracellular domains, mAb B23.1 was shown to recognize only gp49B1. Coligation of mAb B23.1 bound to gp49B1 and IgE fixed to the high-affinity Fc receptor for IgE on the surface of mouse bone marrow-derived mast cells inhibited exocytosis in a dose-related manner, as defined by the release of the secretory granule constituent beta-hexosaminidase, as well as the generation of the membrane-derived lipid mediator, leukotriene C4. Thus, gp49B1 is an immunoreceptor tyrosine-based inhibition motif-containing integral cell-surface protein that downregulates the high-affinity Fc receptor for IgE-mediated release of proinflammatory mediators from mast cells. Our findings establish a novel counterregulatory transmembrane pathway by which mast cell activation can be inhibited.
Abstract:
The search for novel leads is a critical step in the drug discovery process. Computational approaches to identify new lead molecules have focused on discovering complete ligands by evaluating the binding affinity of a large number of candidates, a task of considerable complexity. A new computational method is introduced in this work based on the premise that the primary molecular recognition event in the protein binding site may be accomplished by small core fragments that serve as molecular anchors, providing a structurally stable platform that can be subsequently tailored into complete ligands. To fulfill its role, we show that an effective molecular anchor must meet both the thermodynamic requirement of relative energetic stability of a single binding mode and its consistent kinetic accessibility, which may be measured by the structural consensus of multiple docking simulations. From a large number of candidates, this technique is able to identify known core fragments responsible for primary recognition by the FK506 binding protein (FKBP-12), along with a diverse repertoire of novel molecular cores. By contrast, absolute energetic criteria for selecting molecular anchors are found to be promiscuous. A relationship between a minimum frustration principle of binding energy landscapes and receptor-specific molecular anchors in their role as "recognition nuclei" is established, thereby unraveling a mechanism of lead discovery and providing a practical route to receptor-biased computational combinatorial chemistry.
Abstract:
Vaccination with synthetic peptides representing cytotoxic T lymphocyte (CTL) epitopes can lead to a protective CTL-mediated immunity against tumors or viruses. We now report that vaccination with a CTL epitope derived from the human adenovirus type 5 E1A-region (Ad5E1A234-243), which can serve as a target for tumor-eradicating CTL, enhances rather than inhibits the growth of Ad5E1A-expressing tumors. This adverse effect of peptide vaccination was rapidly evoked, required low doses of peptide (10 micrograms), and was achieved by a mode of peptide delivery that induces protective T-cell-mediated immunity in other models. Ad5E1A-specific CTL activity could no longer be isolated from mice after injection of Ad5E1A-peptide, indicating that tolerization of Ad5E1A-specific CTL activity causes the enhanced tumor outgrowth. In contrast to peptide vaccination, immunization with adenovirus, expressing Ad5E1A, induced Ad5E1A-specific immunity and prevented the outgrowth of Ad5E1A-expressing tumors. These results show that immunization with synthetic peptides can lead to the elimination of anti-tumor CTL responses. These findings are important for the design of safe peptide-based vaccines against tumors, allogeneic organ transplants, and T-cell-mediated autoimmune diseases.
Abstract:
A central theme of cognitive neuroscience is that different parts of the brain perform different functions. Recent evidence from neuropsychology suggests that even the processing of arbitrary stimulus categories that are defined solely by cultural conventions (e.g., letters versus digits) can become spatially segregated in the cerebral cortex. How could the processing of stimulus categories that are not innate and that have no inherent structural differences become segregated? We propose that the temporal clustering of stimuli from a given category interacts with Hebbian learning to lead to functional localization. Neural network simulations bear out this hypothesis.
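The proposed interaction of temporal clustering with Hebbian learning can be sketched in a toy competitive network: two arbitrary categories activate disjoint input features, exemplars are presented in temporal blocks, a decaying activity trace biases the competition toward recently active units, and simple Hebbian updates then segregate the categories onto different output units. This is a bare illustration of the mechanism, not the authors' simulations, and every parameter below is an illustrative assumption.

```python
# Toy competitive-Hebbian sketch: temporally blocked categories + activity
# trace -> category-specific units. Illustrative only.
import numpy as np

n_features = 8
# Two arbitrary "categories" (like letters vs. digits) with disjoint features.
x_A = np.array([1., 1., 1., 1., 0., 0., 0., 0.])
x_B = np.array([0., 0., 0., 0., 1., 1., 1., 1.])

W = np.zeros((2, n_features))
W[0, 0], W[1, 0], W[1, 4] = 0.03, 0.01, 0.03   # tiny initial asymmetry

trace = np.zeros(2)               # decaying trace of recent winners
lr, decay, bias = 0.1, 0.5, 0.01

schedule = ([x_A] * 10 + [x_B] * 10) * 5       # temporally clustered blocks
for x in schedule:
    act = W @ x + bias * trace    # recent winners get a competitive head start
    k = int(np.argmax(act))
    W[k] += lr * x                # Hebbian strengthening of the winning unit
    trace = decay * trace
    trace[k] += 1.0

# Each unit ends up responding to one category.
print(np.argmax(W @ x_A), np.argmax(W @ x_B))
```

Within a block the trace keeps the same unit winning across successive exemplars, so its weights accumulate category structure; that is the toy analogue of temporal clustering driving functional localization.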
Abstract:
A class of potent nonpeptidic inhibitors of human immunodeficiency virus protease has been designed by using the three-dimensional structure of the enzyme as a guide. By employing iterative protein cocrystal structure analysis, design, and synthesis the binding affinity of the lead compound was incrementally improved by over four orders of magnitude. An inversion in inhibitor binding mode was observed crystallographically, providing information critical for subsequent design and highlighting the utility of structural feedback in inhibitor optimization. These inhibitors are selective for the viral protease enzyme, possess good antiviral activity, and are orally available in three species.
Abstract:
We quantify the rate and efficiency of picosecond electron transfer (ET) from PbS nanocrystals, grown by successive ionic layer adsorption and reaction (SILAR), into a mesoporous SnO2 support. Successive SILAR deposition steps allow for stoichiometry- and size-variation of the quantum dots (QDs), characterized using transmission electron microscopy. Whereas for sulfur-rich (p-type) QD surfaces substantial electron trapping at the QD surface occurs, for lead-rich (n-type) QD surfaces the QD trapping channel is suppressed and the ET efficiency is boosted. The ET efficiency increase achieved by lead-rich QD surfaces is found to be QD-size dependent, increasing linearly with QD surface area. On the other hand, ET rates are found to be independent of both QD size and surface stoichiometry, suggesting that the donor–acceptor energetics (constituting the driving force for ET) are fixed due to Fermi level pinning at the QD/oxide interface. Implications of our results for QD-sensitized solar cell design are discussed.
Abstract:
The economic design of a distillation column or distillation sequence is a challenging problem that has been addressed by superstructure approaches. However, these methods have not been widely used because they lead to mixed-integer nonlinear programs that are hard to solve and require complex initialization procedures. In this article, we propose to address this challenging problem by replacing the distillation columns with Kriging-based surrogate models generated via state-of-the-art distillation models. We study different columns of increasing difficulty and show that it is possible to obtain accurate Kriging-based surrogate models. The optimization strategy ensures that convergence to a local optimum is guaranteed for numerical noise-free models. For distillation columns (slightly noisy systems), Karush–Kuhn–Tucker optimality conditions cannot be tested directly on the actual model, but we can still guarantee a local minimum in a trust region of the surrogate model that contains the actual local minimum.
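A Kriging-style surrogate of the kind proposed can be sketched in a few lines of plain NumPy: fit a Gaussian-kernel interpolator to a handful of samples of an "expensive" function and then query it cheaply. The cheap analytic function below is only a stand-in for a rigorous column simulation, and the hyperparameters are illustrative, not the article's settings.

```python
# Minimal Kriging-style (Gaussian-kernel) surrogate of an expensive 1-D model.
import numpy as np

def expensive_model(x):
    # Placeholder for a rigorous distillation simulation (illustrative).
    return np.sin(3.0 * x) + 0.5 * x

X = np.linspace(0.0, 2.0, 8)     # sampled designs
y = expensive_model(X)

ell, nugget = 0.4, 1e-10         # length scale and regularization (assumed)

def kernel(a, b):
    """Squared-exponential covariance between point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

K = kernel(X, X) + nugget * np.eye(len(X))
alpha = np.linalg.solve(K, y)    # fit: weights of the kernel interpolator

def surrogate(xq):
    """Kriging predictor at query points xq."""
    return kernel(np.atleast_1d(xq), X) @ alpha

xq = np.linspace(0.0, 2.0, 101)
err = np.max(np.abs(surrogate(xq) - expensive_model(xq)))
print(f"max surrogate error on [0, 2]: {err:.4f}")
```

An optimizer can then call `surrogate` thousands of times at negligible cost, refitting in a shrinking trust region around the incumbent, which is the spirit of the strategy described in the abstract.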