997 results for complex-coupled
Abstract:
Authigenic carbonates forming at an active methane seep on the Makran accretionary prism consist mainly of aragonite in microcrystalline, cryptocrystalline, and botryoidal phases. The carbonate δ13C values are very negative (-49.0 to -44.0‰ V-PDB), consistent with microbial methane as the dominant carbon source. The carbonate δ18O values are exclusively positive (+3.0 to +4.5‰ V-PDB) and indicate precipitation in equilibrium with seawater at bottom-water temperatures. The content of rare earth elements and yttrium (REE + Y), determined by laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) and solution ICP-MS, varies between the aragonite varieties: early microcrystalline aragonite yields the highest, cryptocrystalline aragonite intermediate, and later botryoidal aragonite the lowest REE + Y concentrations. Shale-normalised REE + Y patterns of the different types of authigenic carbonate reflect distinct pore-fluid compositions during precipitation. Microcrystalline aragonite shows high contents of middle rare earth elements (MREE), reflecting REE patterns ascribed to anoxic pore water. Cryptocrystalline aragonite exhibits a seawater-like REE + Y pattern at elevated total REE + Y concentrations, indicating higher REE concentrations in pore waters that were influenced by seawater. Botryoidal aragonite is characterised by seawater-like REE + Y patterns at initial growth stages, followed by an increase of light rare earth elements (LREE) with advancing crystal growth, reflecting a changing pore-fluid composition during precipitation of this cement. Conventional sample preparation involving micro-drilling of carbonate phases and subsequent solution ICP-MS does not allow such subtle changes in the REE + Y composition of individual carbonate phases to be recognised. To reconstruct the evolution of pore-water composition during early diagenesis, an analytical approach is required that can track the changing elemental composition across a paragenetic sequence as well as within individual phases. High-resolution analysis of seep carbonates from the Makran accretionary prism by LA-ICP-MS reveals that the pore-fluid composition not only evolved in the course of the formation of the different phases, but also changed during the precipitation of individual phases.
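As a minimal sketch of the shale-normalisation step described above, the snippet below divides measured REE + Y concentrations by Post-Archaean Australian Shale (PAAS) reference values and computes a crude MREE-enrichment index. The spot concentrations and the PAAS values used here are illustrative placeholders, not data from the study.

```python
import numpy as np

# Illustrative, order-of-magnitude PAAS reference values (ppm); replace with a
# published PAAS compilation for real work.
PAAS = {"La": 38.0, "Ce": 80.0, "Pr": 8.9, "Nd": 34.0, "Sm": 5.6, "Eu": 1.1,
        "Gd": 4.7, "Tb": 0.77, "Dy": 4.4, "Y": 27.0, "Ho": 1.0, "Er": 2.9,
        "Tm": 0.40, "Yb": 2.8, "Lu": 0.43}

def shale_normalise(sample_ppm):
    """Return REE + Y concentrations normalised to the shale reference."""
    return {el: sample_ppm[el] / PAAS[el] for el in sample_ppm}

def mree_enrichment(norm):
    """Crude MREE/(LREE+HREE) index: values > 1 suggest the MREE bulge typical of anoxic pore water."""
    lree = np.mean([norm[el] for el in ("La", "Pr", "Nd")])
    mree = np.mean([norm[el] for el in ("Sm", "Eu", "Gd", "Tb", "Dy")])
    hree = np.mean([norm[el] for el in ("Er", "Tm", "Yb", "Lu")])
    return 2.0 * mree / (lree + hree)

# Hypothetical single LA-ICP-MS spot analysis (ppm), for illustration only.
spot = {"La": 1.2, "Ce": 1.9, "Pr": 0.35, "Nd": 1.8, "Sm": 0.55, "Eu": 0.16,
        "Gd": 0.62, "Tb": 0.09, "Dy": 0.50, "Y": 4.0, "Ho": 0.10, "Er": 0.26,
        "Tm": 0.03, "Yb": 0.20, "Lu": 0.03}

norm = shale_normalise(spot)
print(f"MREE enrichment index: {mree_enrichment(norm):.2f}")
```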
Abstract:
A Probabilistic Safety Assessment (PSA) is being developed for a steam-methane reforming hydrogen production plant linked to a High-Temperature Gas-Cooled Nuclear Reactor (HTGR). This work is based on the Japan Atomic Energy Research Institute's (JAERI) High Temperature Test Reactor (HTTR) prototype in Japan. The study has two major objectives: to calculate the risk to onsite and offsite individuals, and to calculate the frequency of the different types of damage to the complex. A simplified HAZOP study, based on existing studies, was performed to identify initiating events. The initiating events presented here are a methane pipe break, a helium pipe break, and a PPWC heat exchanger pipe break. Generic data were used for the fault tree analysis and the initiating event frequencies. SAPHIRE was used for the PSA analysis. The results show that the average frequency of an accident at this complex is 2.5E-06, divided among the various end states. The dominant sequences result in graphite oxidation, which does not pose a health risk to the population. The dominant sequences that could affect the population are those that result in a methane explosion; they occur with a frequency of 6.6E-8/year, while the other sequences are much less frequent. The health risk arises only if there are people in the vicinity who could be affected by the explosion. The analysis also demonstrates that an accident in one of the plants has little effect on the other, given the design-basis distance between the plants, the fact that the reactor is underground, and other safety characteristics of the HTGR. Sensitivity studies are being performed to determine where additional and improved data are needed.
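As a minimal illustration of how such end-state frequencies are assembled in a PSA, the sketch below multiplies an initiating-event frequency by the conditional failure probability of an event-tree branch and sums the sequences that map to the same end state. The event names and numbers are hypothetical and are not taken from the study.

```python
# Hypothetical event-tree bookkeeping: the frequency of each accident sequence is the
# initiating-event frequency times the conditional probability of its branch.
initiating_events = {            # occurrences per year (hypothetical)
    "methane_pipe_break": 1.0e-3,
    "helium_pipe_break": 5.0e-4,
}

# (initiating event, conditional branch probability, resulting end state)
sequences = [
    ("methane_pipe_break", 1.0e-4, "methane_explosion"),
    ("methane_pipe_break", 2.0e-3, "graphite_oxidation"),
    ("helium_pipe_break",  5.0e-3, "graphite_oxidation"),
]

end_states = {}
for ie, branch_prob, state in sequences:
    freq = initiating_events[ie] * branch_prob          # per year
    end_states[state] = end_states.get(state, 0.0) + freq

for state, freq in end_states.items():
    print(f"{state}: {freq:.2e} /year")
print(f"total accident frequency: {sum(end_states.values()):.2e} /year")
```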
Abstract:
Modern power converters must fulfill many requirements. Most of the applications where these converters are used demand smaller converters with high efficiency, improved power density and a fast dynamic response. For instance, loads like microprocessors demand aggressive current steps with very high slew rates (100 A/µs and higher); moreover, during these load steps the supply voltage of the microprocessor must be kept within tight limits to ensure its correct performance. Meeting these requirements is not an easy task; complex solutions such as advanced topologies (for example, multiphase converters) and advanced control strategies are often needed. It is also necessary to operate the converter at high switching frequencies and to use capacitors with high capacitance and low ESR. Improving the dynamic response of power converters does not rely only on the control strategy; the power topology must also be suited to enable a fast dynamic response. Moreover, in recent years a fast dynamic response no longer means only handling fast load steps: output voltage steps are gaining importance as well. At least two applications that require fast voltage changes can be named. The first is low-power microprocessors, in which the supply voltage is changed according to the workload while the operating frequency of the microprocessor is changed at the same time; an important reduction in voltage-dependent losses can be achieved with such changes. This technique is known as Dynamic Voltage Scaling (DVS). Another application where important energy savings can be achieved by changing the supply voltage is radio-frequency power amplifiers. For example, RF architectures based on Envelope Tracking and Envelope Elimination and Restoration techniques can take advantage of supply-voltage modulation and achieve important energy savings in the power amplifier. However, to achieve these efficiency improvements, a power converter with high efficiency and sufficient bandwidth (hundreds of kHz or even tens of MHz) is required to ensure an adequate supply voltage. The main objective of this Thesis is to improve the dynamic response of DC-DC converters from the point of view of the power topology. Here the term dynamic response refers both to load steps and to voltage steps; it is also of interest to modulate the output voltage of the converter with a specific bandwidth. To accomplish this, the question of what limits the dynamic response of power converters must be answered. Analyzing this question leads to the conclusion that the dynamic response is limited by the power topology and, specifically, by the filter inductance of the converter, which is placed in series between the input and the output of the converter. This series inductance determines the gain of the converter and provides its regulation capability. Although the energy stored in the filter inductance enables regulation and the filtering of the output voltage, it imposes the limitation that is the concern of this Thesis: the series inductance stores energy and prevents the current from changing quickly, limiting the slew rate of the current through this inductor. Different solutions have been proposed in the literature to reduce the limit imposed by the filter inductor, including many new topologies and improvements to known topologies.
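As a worked illustration of the slew-rate limit and of the DVS motivation described above (the numerical values are chosen only for illustration and are not taken from the Thesis):

```latex
% The voltage available across the series (filter) inductor bounds how fast its
% current can change; with illustrative values:
\[
  \left|\frac{di_L}{dt}\right| \;\le\; \frac{v_L}{L}
  \qquad\Longrightarrow\qquad
  \frac{12\ \mathrm{V}}{1\ \mu\mathrm{H}} = 12\ \mathrm{A}/\mu\mathrm{s}
  \;\ll\; 100\ \mathrm{A}/\mu\mathrm{s},
\]
% so the inductor current cannot follow the load step and the output capacitor must
% supply the difference. For DVS, the benefit of lowering the supply voltage follows
% from the standard dynamic-power law of CMOS logic:
\[
  P_{\mathrm{dyn}} = \alpha\, C\, V_{DD}^{2}\, f .
\]
```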
Complex control strategies have also been proposed with the objective of improving the dynamic response of power converters. In the proposed topologies the energy stored in the series inductor is reduced; examples are multiphase converters, Buck converters operating at very high frequency, or the addition of a low-impedance path in parallel with the series inductance. The control techniques proposed in the literature focus on adjusting the output voltage as fast as the power stage allows; examples are hysteresis control, V² control, and minimum-time control. In some of the proposed topologies a reduction in the value of the series inductance is achieved, and with it the energy stored in this magnetic element; less stored energy means a faster dynamic response. However, in some cases (as in the high-frequency Buck converter) the dynamic response is improved at the cost of worsening the efficiency. In this Thesis a drastic solution is proposed: to completely eliminate the series inductance of the converter. This is a more radical solution than those proposed in the literature. If the series inductance is eliminated, the regulation capability of the converter is limited, which can make it difficult to use the topology in single-converter solutions; however, the topology is suitable for power architectures where the energy conversion is carried out by more than one converter. When the series inductor is removed from the converter, the current slew rate is no longer limited, and the dynamic response of the converter becomes independent of the switching frequency. This is the main advantage of eliminating the series inductor. The main objective is therefore to propose an energy conversion strategy that operates without a series inductance. Without a series inductance, no energy is stored between the input and the output of the converter, and the dynamic response would be instantaneous if all the devices were ideal. If the energy transfer from the input to the output of the converter took place instantaneously when a load step occurred, conceptually it would not be necessary to store energy at the output of the converter (no output capacitor COUT would be needed), and if the input source were ideal, the input capacitor CIN would not be necessary either. This last feature (no CIN with an ideal VIN) is common to all power converters. However, when the concept is actually implemented, parasitic inductances such as the leakage inductance of the transformer and the parasitic inductance of the PCB cannot be avoided, because they are inherent to the implementation of the converter. These parasitic elements do not significantly affect the proposed concept. In this Thesis it is proposed to operate the converter without a series inductance in order to improve its dynamic response; the price is that the continuous regulation capability of the converter is lost. The regulation is described as continuous because, as will be explained throughout the Thesis, discrete regulation is indeed possible: a converter without a filter inductance and without energy stored in the magnetic element is capable of reaching a limited number of output voltages, and the changes between these output voltage levels are achieved very quickly.
The proposed energy conversion strategy is implemented by means of a multiphase converter in which the coupling of the phases is carried out by discrete two-winding transformers instead of coupled inductors, since transformers are, ideally, non-energy-storing elements. This idea is the main contribution of this Thesis. The feasibility of the energy conversion strategy is first analyzed and then verified by simulation and by the implementation of experimental prototypes. Once the strategy is proved valid, different options for implementing the magnetic structure are analyzed, and three different discrete transformer arrangements are studied and implemented. A converter based on this energy conversion strategy is designed with a different approach from that used for classic converters, since an additional design degree of freedom is available: the switching frequency can be chosen according to the design specifications without penalizing the dynamic response or the efficiency. Low operating frequencies can be chosen to favor efficiency; on the other hand, high operating frequencies (MHz) can be chosen to favor the size of the converter. For this reason, a particular design procedure is proposed for the 'inductorless' conversion strategy. Finally, applications where the features of the proposed conversion strategy (high efficiency with fast dynamic response) are advantageous are proposed. One example is two-stage power architectures, where a high-efficiency converter is needed as the first stage and a second stage provides the fine regulation. Another example is RF power amplifiers, where the voltage is modulated following an envelope reference in order to save power; this application requires a high-efficiency converter capable of achieving fast voltage steps. The main contributions of this Thesis are the following: the proposal of a conversion strategy that, ideally, stores no energy in the magnetic element; the validation and implementation of the proposed energy conversion strategy; the study of different magnetic structures based on discrete transformers for its implementation; the elaboration and validation of a design procedure; and the identification and validation of applications for the proposed energy conversion strategy. It is important to remark that this work was done in collaboration with Intel. The particular features of the proposed conversion strategy make it possible to solve the problems related to microprocessor powering in a different way. For example, the high efficiency achieved with the proposed conversion strategy makes it a good candidate for power conditioning as the first stage of a two-stage power architecture for powering microprocessors.
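A simple worked example of why the first-stage efficiency matters in the two-stage architecture mentioned above (the percentages are illustrative, not measurements from the Thesis):

```latex
% Cascaded stages multiply their efficiencies:
\[
  \eta_{\mathrm{total}} = \eta_{1}\,\eta_{2},
  \qquad\text{e.g.}\qquad
  0.97 \times 0.90 \approx 0.87,
  \qquad
  0.90 \times 0.90 = 0.81,
\]
% so raising the unregulated first stage from 90% to 97% lifts the overall efficiency
% by roughly six points while the second stage still provides the fine regulation.
```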
Abstract:
The study of the lateral dynamics of running trains on bridges is important mainly for the safety of the traffic, and may be relevant for laterally compliant bridges. These studies require 3D coupled vehicle-bridge models and consideration of wheel-rail contact, a phenomenon which is complex and costly to model in detail. We describe here a fully nonlinear coupled model, formulated in absolute coordinates and incorporated into a commercial finite element framework. Two applications are presented: first, a vehicle subject to a strong wind gust while traversing a bridge, showing the relevance of the nonlinear wheel-rail contact model as well as the interaction between bridge and vehicle; second, a real viaduct in a high-speed line, with a long continuous deck and tall piers with high lateral compliance. The results show the safety of the traffic as well as the relevance of considering the wind action and the nonlinear response.
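A minimal sketch of the kind of coupled lateral interaction discussed above, reduced to two degrees of freedom (one bridge mode and one vehicle lateral displacement) with a linearised coupling instead of the paper's nonlinear wheel-rail contact model; all parameters and the gust shape are illustrative placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal 2-DOF lateral model: one bridge mode (yb) coupled to one vehicle lateral
# DOF (yv) through a linearised contact/suspension stiffness kc and damping cc.
mb, kb, cb = 2.0e5, 8.0e6, 4.0e4      # bridge modal mass [kg], stiffness [N/m], damping [N s/m]
mv, kc, cc = 5.0e4, 1.0e6, 2.0e4      # vehicle mass and coupling stiffness/damping

def gust(t):
    """Crude '1 - cos' lateral wind gust acting on the vehicle (N)."""
    return 4.0e4 * (1 - np.cos(2 * np.pi * t / 4.0)) if t < 4.0 else 0.0

def rhs(t, x):
    yb, vb, yv, vv = x
    f_c = kc * (yv - yb) + cc * (vv - vb)          # coupling force exerted on the bridge
    ab = (-kb * yb - cb * vb + f_c) / mb
    av = (gust(t) - f_c) / mv
    return [vb, ab, vv, av]

sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0, 0.0, 0.0], max_step=1e-3)
print(f"peak bridge lateral displacement: {np.max(np.abs(sol.y[0])) * 1e3:.1f} mm")
```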
Abstract:
The time evolution of an ensemble of dynamical systems coupled through an irregular interaction scheme gives rise to dynamics of great complexity and to emergent phenomena that cannot be predicted from the properties of the individual systems. The main objective of this thesis is precisely to increase our understanding of the interplay between the interaction topology and the collective dynamics that a complex network can support. This is a very broad subject, so in this thesis we limit ourselves to the study of three relevant problems that have strong connections among them. First, it is a well-known fact that in many natural and man-made systems that can be represented as complex networks the topology is not static; rather, it depends on the dynamics taking place on the network (as happens, for instance, in the neuronal networks of the brain). In these adaptive networks the topology itself emerges from the self-organization of the system. To better understand how the properties that are commonly observed in real networks emerge spontaneously, we have studied the behavior of systems that evolve according to local adaptive rules that are empirically motivated. Our numerical and analytical results show that self-organization brings about two of the most universally found properties of complex networks: at the mesoscopic scale, the appearance of a community structure, and, at the macroscopic scale, the existence of a power law in the weight distribution of the network interactions. The fact that these properties show up in two models with quantitatively different mechanisms that follow the same general adaptive principles suggests that our results may be generalized to other systems as well, and that they may be behind the origin of these properties in some real systems. We also propose a new measure that provides a ranking of the elements of a network in terms of their relevance for the maintenance of collective dynamics. Specifically, we study the vulnerability of the elements under perturbations or large fluctuations, interpreted as a measure of the impact these external events have on the disruption of collective motion. Our results suggest that the dynamic vulnerability measure depends largely on local properties (our conclusions thus being valid for different topologies), and they show a non-trivial dependence of the vulnerability on the connectivity of the network elements. Finally, we propose a strategy for the imposition of generic goal dynamics on a given network, and we explore its performance in networks with different topologies that support turbulent dynamical regimes. It turns out that heterogeneous networks (and most real networks that have been studied belong in this category) are the most suitable for our strategy for the targeting of desired dynamics, the strategy being very effective even when the knowledge of the network topology is far from accurate. Aside from their theoretical relevance for the understanding of collective phenomena in complex systems, the methods and results discussed here might lead to applications in experimental and technological systems, such as in vitro neuronal systems, the central nervous system (where pathological synchronous activity sometimes occurs), communication systems, or power grids.
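A minimal sketch of the kind of dynamical vulnerability ranking described above. The dynamical model (Kuramoto phase oscillators), the test network and the metric are illustrative assumptions, not the thesis's actual formulation: each node is removed in turn and the drop in the time-averaged synchronization order parameter is used to rank its relevance for the collective dynamics.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

def order_parameter(G, steps=2000, dt=0.01, K=2.0):
    """Time-averaged Kuramoto order parameter r on graph G (simple Euler integration)."""
    n = G.number_of_nodes()
    A = nx.to_numpy_array(G)
    omega = rng.normal(0.0, 0.5, n)               # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)
    r_acc, n_avg = 0.0, steps - steps // 2
    for step in range(steps):
        # dtheta_i/dt = omega_i + K * sum_j A_ij * sin(theta_j - theta_i)
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta += dt * (omega + K * coupling)
        if step >= steps // 2:                    # average over the second half
            r_acc += np.abs(np.exp(1j * theta).mean())
    return r_acc / n_avg

G = nx.barabasi_albert_graph(30, 2, seed=1)       # heterogeneous test network
r0 = order_parameter(G)
vulnerability = {}
for node in G.nodes():
    H = G.copy()
    H.remove_node(node)
    vulnerability[node] = r0 - order_parameter(H)  # drop in synchrony when node is lost

worst = max(vulnerability, key=vulnerability.get)
print(f"baseline r = {r0:.2f}; most vulnerable element: node {worst}")
```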
Abstract:
This Thesis addresses the efficiency problems of electrical grids from the consumption point of view. In particular, efficiency is improved by smoothing the aggregated consumption curve. This consumption-smoothing objective entails two major improvements in the use of electrical grids: i) in the short term, a better use of the existing infrastructure, and ii) in the long term, a reduction of the infrastructure required to supply the same energy needs. In addition, this Thesis faces a new energy paradigm in which distributed generation, in particular photovoltaic (PV) generation, is widespread over the electrical grids. This kind of energy source affects the operation of the grid by increasing its variability, which implies that a high penetration rate of photovoltaic electricity is detrimental to the stability of the electrical grid. This Thesis seeks to smooth the aggregated consumption while taking this energy source into account. Therefore, not only is the efficiency of the electrical grid improved, but the penetration of photovoltaic electricity into the grid can also be increased. This proposal brings great benefits in the economic, social and environmental fields. The actions that influence the way consumers use electricity in order to achieve energy savings or higher efficiency are called Demand-Side Management (DSM). This Thesis proposes two different DSM algorithms to meet the aggregated-consumption-smoothing objective. The difference between the two DSM algorithms lies in the framework in which they operate: the local framework and the grid framework. Depending on the DSM framework, the energy goal and the procedure used to reach it are different. In the local framework, the DSM algorithm uses only local information; it does not take into account other consumers or the aggregated consumption of the electrical grid. Although this may differ from the general definition of DSM, it makes sense in local facilities equipped with Distributed Energy Resources (DERs). In this case, DSM is focused on maximizing the use of local energy, reducing the dependence on the grid. The proposed DSM algorithm significantly improves the self-consumption of the local PV generator. Simulated and real experiments show that self-consumption serves as an important energy management strategy, reducing electricity transport and encouraging the user to control his energy behavior. However, despite all the advantages of increased self-consumption, it does not contribute to smoothing the aggregated consumption. The effects of the local facilities on the electrical grid are studied when the DSM algorithm is focused on self-consumption maximization. This approach may have undesirable effects, increasing the variability of the aggregated consumption instead of reducing it, because the algorithm considers only local variables in the local framework. The results suggest that coordination between facilities is required: through this coordination, consumption should be modified taking other elements of the grid into account and seeking to smooth the aggregated consumption. In the grid framework, the DSM algorithm takes into account both local and grid information. This Thesis develops a self-organized algorithm to manage the consumption of an electrical grid in a distributed way. The goal of this algorithm is the smoothing of the aggregated consumption, as in classical DSM implementations. The distributed approach means that DSM is performed from the consumers' side without following direct commands issued by a central entity. Therefore, this Thesis proposes a parallel management structure rather than a hierarchical one as in classical electrical grids, which implies that a coordination mechanism between facilities is required. This Thesis seeks to minimize the amount of information necessary for this coordination. To achieve this objective, two collective coordination techniques have been used: coupled oscillators and swarm intelligence. The combination of these techniques to coordinate a system with the characteristics of the electrical grid is itself a novel approach, so this coordination objective is a contribution not only to the field of energy management but also to the field of collective systems. Results show that the proposed DSM algorithm reduces the difference between the maxima and minima of the electrical grid consumption in proportion to the amount of energy controlled by the algorithm: the greater the amount of energy controlled by the algorithm, the greater the improvement in the efficiency of the electrical grid. In addition to the advantages resulting from the smoothing of the aggregated consumption, other advantages arise from the distributed approach followed in this Thesis. These advantages are summarized in the following features of the proposed DSM algorithm (a simplified coordination sketch follows the list):
• Robustness: in a centralized system, a failure or breakage of the central node causes a malfunction of the whole system. Managing the grid in a distributed way implies that there is no central control node, so a failure in any facility does not affect the overall operation of the grid.
• Data privacy: the use of a distributed topology means that there is no central node holding sensitive information about all consumers. This Thesis goes a step further: the proposed DSM algorithm does not use specific information about consumer behavior, and the coordination between facilities is completely anonymous.
• Scalability: the proposed DSM algorithm operates with any number of facilities, so new facilities can be incorporated without affecting its operation.
• Low cost: the proposed DSM algorithm adapts to current grids without any topological requirements. In addition, every facility computes its own management with low computational requirements, so a central node with high computing power is not needed.
• Quick deployment: the scalability and low cost of the proposed DSM algorithm allow a quick deployment; no complex deployment schedule is required.
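The sketch below is an extremely simplified stand-in for the distributed coordination idea described above, not the Thesis's coupled-oscillator/swarm algorithm: each facility observes only the anonymous aggregated consumption profile and, if it helps, moves its own deferrable load to the least-loaded hour, which flattens the aggregate without any central controller. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_facilities, slots = 200, 24          # facilities, hourly slots in a day (illustrative)

# Each facility has one 1 kW deferrable load scheduled in a random hour.
schedule = rng.integers(0, slots, n_facilities)

def aggregated(schedule):
    return np.bincount(schedule, minlength=slots).astype(float)   # kW per hour

print("peak-to-valley before:", np.ptp(aggregated(schedule)))

# Distributed coordination rounds: each facility in turn looks only at the anonymous
# aggregated profile (with its own load removed) and reschedules to the minimum.
for _ in range(5):
    for i in range(n_facilities):
        profile = aggregated(schedule)
        profile[schedule[i]] -= 1.0    # profile as seen without facility i's load
        schedule[i] = int(np.argmin(profile))

print("peak-to-valley after: ", np.ptp(aggregated(schedule)))
```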
Abstract:
Martensitic transformation (MT), in a narrow sense, is defined as a change of the crystal structure to form a coherent phase, or multi-variant domain structures, out of a parent phase with the same composition, by small shuffles or co-operative movements of atoms. Over the past century, MTs have been discovered in different materials, from steels to shape memory alloys, ceramics, and smart materials. They lead to remarkable properties such as high strength, shape memory/superelasticity effects, or ferroic functionalities including piezoelectricity, electro- and magneto-striction, etc. Various theories/models have been developed, in synergy with the development of solid state physics, to understand why MTs can generate these rich microstructures and give rise to intriguing properties. Among the well-established theories, the Phenomenological Theory of Martensitic Crystallography (PTMC) is able to predict the habit plane and the orientation relationship between austenite and martensite. The re-interpretation of the PTMC theory within a continuum mechanics framework (CM-PTMC) explains the formation of the multivariant domain structures, while the Landau theory with inertial dynamics unravels the physical origins of precursors and other dynamic behaviors. Crystal lattice dynamics unveils the acoustic softening of the lattice strain waves leading to the weak first-order displacive transformation, etc. Though differing in statics or dynamics due to their origins in different branches of physics (e.g. continuum mechanics or crystal lattice dynamics), these theories should be inherently connected with each other and show certain elements in common within a unified perspective of physics. However, the physical connections and distinctions among the theories/models have not been addressed yet, although they are critical for further improving the models of MTs and for developing integrated models of more complex displacive-diffusive coupled transformations. Therefore, this thesis started with two objectives. The first was to reveal the physical connections and distinctions among the models of MT by means of detailed theoretical analyses and numerical simulations. The second was to expand the Landau model to be able to study MTs in polycrystals, in the case of displacive-diffusive coupled transformations, and in the presence of dislocations. Starting with a comprehensive review, the physical kernels of the current models of MTs are presented. Their ability to predict MTs is clarified by means of theoretical analyses and simulations of the microstructure evolution of cubic-to-tetragonal and cubic-to-trigonal MTs in 3D. This analysis reveals that the Landau model with an irreducible representation of the transformation strain is equivalent to the CM-PTMC theory and to the microelasticity model in predicting the static features of MTs, but provides a better interpretation of the dynamic behaviors. However, the applications of the Landau model to structural materials are limited by its complexity. Thus, the first result of this thesis is the development of a nonlinear Landau model with an irreducible representation of strains and inertial dynamics for polycrystals. The simulations demonstrate that the updated model is physically consistent with the CM-PTMC in statics, and also permits the prediction of a classical 'C-shaped' phase diagram of martensitic nucleation modes activated by the combination of quenching temperature and applied stress conditions interplaying with the Landau transformation energy. Next, the Landau model of MT is further integrated with a quantitative diffusional transformation model to elucidate the atomic relaxation and the short-range diffusion of elements during the MT in steel. The model for displacive-diffusive transformations includes the effects of grain boundary relaxation for heterogeneous nucleation and the spatio-temporal evolution of diffusion potentials and chemical mobilities, by means of coupling with a CALPHAD-type thermo-kinetic calculation engine and database. The model is applied to study the microstructure evolution of polycrystalline carbon steels processed by the Quenching and Partitioning (Q&P) route in 2D. The simulated mixed microstructure and composition distribution are compared with available experimental data. The results show the important role played by the differences in diffusion mobility between austenite and martensite in the carbon partitioning in these steels. Finally, a multi-field model is proposed by incorporating a coarse-grained dislocation model into the developed Landau model to account for the morphological difference between steels and shape memory alloys with the same symmetry breaking. The dislocation nucleation, the formation of the 'butterfly' martensite, and the redistribution of carbon after tempering are well represented in the 2D simulations of the microstructure evolution of representative steels. With these simulations, we demonstrate that including the dislocations accounts for the experimental observations of rough twin boundaries, retained austenite within martensite, etc. in steels. Thus, based on the integrated model and the in-house codes developed in this thesis, a preliminary multi-field, multiscale modeling tool has been built. The new tool couples thermodynamics and continuum mechanics at the macroscale with diffusion kinetics and phase-field/Landau models at the mesoscale, and also includes the essentials of crystallography and crystal lattice dynamics at the microscale.
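For reference, a generic form of the Landau free-energy density often used for a cubic-to-tetragonal martensitic transformation is the 2-3-4 polynomial in the two symmetry-adapted deviatoric strains e2 and e3. It is written here only to illustrate the kind of functional such models build on, not necessarily the exact expression used in the thesis:

```latex
\[
  f_{L}(e_2, e_3) \;=\; \tfrac{A}{2}\left(e_2^{2}+e_3^{2}\right)
  \;+\; \tfrac{B}{3}\, e_3\!\left(e_3^{2}-3e_2^{2}\right)
  \;+\; \tfrac{C}{4}\left(e_2^{2}+e_3^{2}\right)^{2},
\]
% with A proportional to (T - T_c), changing sign near the transformation temperature;
% the cubic term selects the three tetragonal variants and makes the transition
% weakly first order.
```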
Abstract:
The main objective of this thesis is the development of optimization methods for the radiation pattern synthesis of array antennas in which a rigorous electromagnetic characterization of the radiators and of the mutual coupling between them is performed. This electromagnetic characterization is usually overlooked in most of the synthesis methods available in the literature, mainly for two reasons. On the one hand, it is argued that the radiation pattern of an array can be approximated by the array factor, which accounts only for the element positions and the excitations applied to them, and that the mutual coupling plays a minor role. As shown in this thesis, the mutual coupling and the rigorous characterization of the array antenna significantly influence the array performance, and in many cases the results obtained when they are taken into account are noticeably different. On the other hand, it is difficult to introduce an analysis procedure into a synthesis technique. The analysis of array antennas is generally computationally expensive, since the structure to be analyzed is large in terms of wavelengths, and a synthesis method requires a large number of analyses, which makes the synthesis problem very expensive computationally or even intractable in some cases. Two methods have been used in this thesis for the analysis of coupled antenna arrays, both developed in the research group in which this thesis has been carried out. They are based on the finite element method (FEM), domain decomposition, and modal analysis. The first one obtains a finite-array characterization from the results of the infinite-array approach; it is especially indicated for large planar arrays with equispaced elements. The second one characterizes the array elements and the mutual coupling between them with a spherical wave expansion of the field radiated by each element; the mutual coupling is computed using the translation and rotation properties of spherical waves, and the method is able to analyze arrays with elements placed in an arbitrary distribution. Both techniques provide a matrix formulation that rigorously characterizes the field radiated by the array, which makes them very suitable for integration into design tools such as the synthesis methods developed in this thesis; the results obtained from these synthesis methods are consequently more accurate. Array synthesis consists of modifying one or several array parameters in search of specified radiation characteristics. The parameters used as optimization variables are usually the excitation weights applied to the elements, but other design parameters can be used as well, such as the element positions or rotations. The desired specifications may be to steer the beam or beams towards specific directions or to generate shaped beams with arbitrary geometry. Further characteristics can be handled as well, such as minimizing the side lobe level or the ripple in desired regions, imposing nulls to avoid possible interferences, or reducing the level of the cross-polar component. The analysis method based on the infinite-array approach considers a finite array as an infinite array with a finite number of excited elements. The non-excited elements are physically present and may have three different terminations: short-circuited, open-circuited, or match-terminated; each of these terminations better reproduces a particular real environment of the array. This analysis method is integrated in this thesis with two different radiation pattern synthesis methods. In the first one, a multi-objective synthesis method based on linear programming is presented, in which it is possible to steer the beam or beams in the desired directions while controlling the side lobes or imposing nulls; this method is very efficient and obtains optimal solutions, as it is based on convex programming. The same analysis method is also applied to a shaped-beam technique in which an originally non-convex (and hard to solve) problem is transformed into a convex one by imposing symmetry restrictions, thus solving a complex problem efficiently. This method allows the synthesis of shaped-beam radiation patterns while controlling the ripple of the main lobe and the side lobe level. The analysis method based on the spherical wave expansion is integrated with three radiation pattern synthesis techniques. First, a shaped-beam synthesis is proposed in which a convex formulation based on the phase retrieval method is solved iteratively: by relaxing the restrictions of the original non-convex problem, near-optimal solutions are obtained efficiently. Two further synthesis methods are proposed, based on the gradient method, in which the optimization variables are the element positions and the element rotations, respectively. A cost function is defined in terms of the rigorous radiation intensity of the coupled array and a user-defined weighting function that allows priorities to be imposed on the different radiation regions, if desired; the gradient of the cost function is obtained with respect to the optimization variable of each method, and the elements are moved or rotated iteratively following the gradient. Both methods reduce the side lobe level by minimizing the cost function; a highly non-convex problem is solved very efficiently, obtaining very good results that depend on the starting point. Finally, an optimization method is presented in which discrete digital phases are synthesized, by means of integer linear programming, to provide a radiation pattern as close as possible to the desired one, obtaining array designs that greatly reduce fabrication costs. For each of the techniques proposed in the thesis, results with real elements are presented, showing the capabilities the methods offer. The results are compared with other methods available in the literature; the importance of taking the real element patterns and the mutual coupling into account in the synthesis process is shown, and the results are compared with commercial software tools, showing good agreement.
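A minimal, uncoupled illustration of the kind of cost function and gradient step described above: it uses the ideal array factor of isotropic elements rather than the rigorous coupled-field formulation of the thesis, a numerical rather than analytical gradient, and illustrative parameters throughout. The element positions of a linear array are adjusted by gradient descent to lower a weighted measure of the radiation intensity in the sidelobe region.

```python
import numpy as np

# Uniform-amplitude linear array of isotropic elements along x (positions in wavelengths).
n_elem = 16
x = np.arange(n_elem) * 0.5                       # start from lambda/2 spacing
theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
u = np.sin(theta)

def radiation_intensity(x):
    af = np.exp(1j * 2 * np.pi * np.outer(u, x)).sum(axis=1)   # broadside array factor
    return np.abs(af) ** 2 / n_elem ** 2

# User-defined weighting function: penalize only the sidelobe region |u| > u0.
u0 = 0.15
w = (np.abs(u) > u0).astype(float)

def cost(x):
    return np.sum(w * radiation_intensity(x))

# Plain gradient descent on the element positions (numerical gradient for brevity).
step, eps = 2e-3, 1e-6
for _ in range(100):
    grad = np.array([(cost(x + eps * np.eye(n_elem)[k]) - cost(x)) / eps
                     for k in range(n_elem)])
    x -= step * grad

intensity = radiation_intensity(x)
sll = 10 * np.log10(np.max(intensity[w > 0]) / np.max(intensity))
print(f"peak sidelobe level after optimization: {sll:.1f} dB")
```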
Abstract:
βarrestins mediate the desensitization of the β2-adrenergic receptor (β2AR) and many other G protein-coupled receptors (GPCRs). Additionally, βarrestins initiate the endocytosis of these receptors via clathrin-coated pits and interact directly with clathrin. Consequently, it has been proposed that βarrestins serve as clathrin adaptors for the GPCR family by linking these receptors to clathrin lattices. AP-2, the heterotetrameric clathrin adaptor protein, has been demonstrated to mediate the internalization of many types of plasma membrane proteins other than GPCRs. AP-2 interacts with the clathrin heavy chain and with cytoplasmic domains of receptors such as those for epidermal growth factor and transferrin. In the present study we demonstrate the formation of an agonist-induced multimeric complex containing a GPCR, βarrestin 2, and the β2-adaptin subunit of AP-2. β2-Adaptin binds βarrestin 2 in a yeast two-hybrid assay and coimmunoprecipitates with βarrestins and β2AR in an agonist-dependent manner in HEK-293 cells. Moreover, β2-adaptin translocates from the cytosol to the plasma membrane in response to the β2AR agonist isoproterenol and colocalizes with β2AR in clathrin-coated pits. Finally, expression of βarrestin 2 minigene constructs containing the β2-adaptin interacting region inhibits β2AR endocytosis. These findings point to a role for AP-2 in GPCR endocytosis, and they suggest that AP-2 functions as a clathrin adaptor for the endocytosis of diverse classes of membrane receptors.
Abstract:
Zinc finger domains are structures that mediate sequence recognition for a large number of DNA-binding proteins. These domains consist of sequences of amino acids containing cysteine and histidine residues tetrahedrally coordinated to a zinc ion. In this report, we present a means to selectively inhibit a zinc finger transcription factor with cobalt(III) Schiff-base complexes. 1H NMR spectroscopy confirmed that the structure of a zinc finger peptide is disrupted by axial ligation of the cobalt(III) complex to the nitrogen of the imidazole ring of a histidine residue. Fluorescence studies reveal that the zinc ion is displaced from the model zinc finger peptide in the presence of the cobalt complex. In addition, gel-shift and filter-binding assays reveal that cobalt complexes inhibit binding of a complete zinc finger protein, human transcription factor Sp1, to its consensus sequence. Finally, a DNA-coupled conjugate of the cobalt complexes selectively inhibited Sp1 in the presence of several other transcription factors.