910 results for Benefits, Distributed Generators, Power Systems


Relevância:

50.00%

Publicador:

Resumo:

The objective of the present article is to assess and compare the performance of electricity generation systems integrated with downdraft biomass gasifiers for distributed power generation. A model for estimating the electric power generation of internal combustion engines and gas turbines powered by syngas was developed. First, the model determines the syngas composition and the lower heating value; second, these data are used to evaluate power generation in Otto, Diesel, and Brayton cycles. Four synthesis gas compositions were tested, corresponding to gasification with: air; pure oxygen; 60% oxygen with 40% steam; and 60% air with 40% steam. The results show maximum power ratios of 0.567 kWh/Nm³ for the gas turbine system, 0.647 kWh/Nm³ for the compression-ignition engine (CIE), and 0.775 kWh/Nm³ for the spark-ignition engine (SIE) when running on synthesis gas produced with pure oxygen as the gasification agent. When these three systems run on synthesis gas produced with atmospheric air as the gasification agent, the maximum power ratios were 0.274 kWh/Nm³ for the gas turbine system, 0.302 kWh/Nm³ for the CIE, and 0.282 kWh/Nm³ for the SIE. The relationship between power output and synthesis gas flow variations is presented, as is the dependence of efficiency on compression ratios. Since the maximum attainable power ratio of the CIE is higher than that of the SIE for gasification with air, more research should be performed on the utilization of synthesis gas in CIEs.
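The two-step calculation described above (first the lower heating value of the syngas, then the electric output per unit of gas) can be sketched as follows. This is a rough illustration only: the component LHVs are approximate literature values, and the composition and efficiency figures are hypothetical, not taken from the paper's model.

```python
# Illustrative sketch: estimate syngas LHV from composition, then an
# electric "power ratio" (kWh per Nm^3 of syngas) for a given electrical
# efficiency. Component LHVs are approximate literature values in MJ/Nm^3.
LHV_COMPONENTS = {"H2": 10.8, "CO": 12.6, "CH4": 35.8}  # MJ/Nm^3

def syngas_lhv(mole_fractions):
    """Volume-weighted LHV of the mixture in MJ/Nm^3 (inerts contribute 0)."""
    return sum(LHV_COMPONENTS.get(gas, 0.0) * x
               for gas, x in mole_fractions.items())

def power_ratio(mole_fractions, electrical_efficiency):
    """Electric output per Nm^3 of syngas, in kWh/Nm^3 (1 kWh = 3.6 MJ)."""
    return syngas_lhv(mole_fractions) * electrical_efficiency / 3.6

# Hypothetical air-blown gasifier composition (N2-diluted, hence low LHV):
air_syngas = {"H2": 0.16, "CO": 0.20, "CH4": 0.02, "N2": 0.50, "CO2": 0.12}
print(round(power_ratio(air_syngas, 0.25), 3))  # kWh/Nm^3
```

The nitrogen dilution in air-blown gasification is what drives the much lower power ratios reported for the air case compared with the pure-oxygen case.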

Resumo:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Resumo:

Within the framework of micro-CHP (Combined Heat and Power) energy systems and the Distributed Generation (DG) concept, an Integrated Energy System (IES) was conceived and built that is able to meet the energy and thermal requirements of specific users by using different types of fuel to feed several micro-CHP energy sources, together with electric generators based on renewable energy sources (RES), electrical and thermal storage systems, and a control system. A 5 kWel Polymer Electrolyte Membrane Fuel Cell (PEMFC) has been studied. Using experimental data obtained from various measurement campaigns, the electrical and CHP performance of the PEMFC system has been determined. The effect of the water management of the anodic exhaust at variable FC loads has been analyzed, and the programming logic of the purge process was optimized, leading also to the determination of the optimal flooding times as the AC power delivered by the cell varies. Furthermore, the degradation mechanisms of the PEMFC system, in particular those due to flooding of the anodic side, have been assessed using an algorithm that treats the FC as a black box and is able to determine the amount of unreacted H2 and, from it, the causes that produce it. Using experimental data covering a two-year time span, the ageing of the FC system has been assessed and analyzed.
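The black-box idea above rests on a simple hydrogen balance: at stack current I, an N-cell stack consumes N·I/(2F) mol/s of H2 by Faraday's law, so any H2 fed beyond that is unreacted. The sketch below illustrates only this balance (with hypothetical operating numbers), not the thesis's actual diagnostic algorithm.

```python
# Illustrative black-box H2 balance for a PEM fuel-cell stack.
# Faraday's law: each H2 molecule supplies 2 electrons, so the stack
# consumes N_cells * I / (2 * F) mol of H2 per second at current I.
F = 96485.0  # Faraday constant, C/mol

def h2_consumed_mol_per_s(current_a, n_cells):
    """H2 converted to electric current, mol/s."""
    return n_cells * current_a / (2.0 * F)

def unreacted_h2(h2_fed_mol_per_s, current_a, n_cells):
    """H2 fed to the anode but not converted to current (purged or lost).
    A persistently large value can point to flooding or other degradation."""
    return h2_fed_mol_per_s - h2_consumed_mol_per_s(current_a, n_cells)

# Hypothetical operating point: 70-cell stack at 60 A, fed 0.024 mol/s of H2
excess = unreacted_h2(0.024, 60.0, 70)
print(f"{excess:.6f} mol/s of H2 not converted to current")
```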

Resumo:

Modern software systems, in particular distributed ones, are everywhere around us and are at the basis of our everyday activities. Hence, guaranteeing their correctness, consistency and safety is of paramount importance. Their complexity makes the verification of such properties a very challenging task. It is natural to expect that these systems are reliable and, above all, usable. i) In order to be reliable, compositional models of software systems need to account for consistent dynamic reconfiguration, i.e., changing at runtime the communication patterns of a program. ii) In order to be usable, compositional models of software systems need to account for interaction, which can be seen as communication patterns among components that collaborate to achieve a common task. The aim of the Ph.D. was to develop powerful techniques based on formal methods for the verification of correctness, consistency and safety properties related to dynamic reconfiguration and communication in complex distributed systems. In particular, static analysis techniques based on types and type systems appeared to be an adequate methodology, considering their success in guaranteeing not only basic safety properties but also more sophisticated ones, such as deadlock or livelock freedom in a concurrent setting. The main contributions of this dissertation are twofold. i) On the components side: we design types and a type system for a concurrent object-oriented calculus to statically ensure consistency of dynamic reconfigurations related to modifications of communication patterns in a program during execution time. ii) On the communication side: we study advanced safety properties related to communication in complex distributed systems, like deadlock-freedom, livelock-freedom and progress. Most importantly, we exploit an encoding of types and terms of a typical distributed language, the session π-calculus, into the standard typed π-calculus, in order to understand their expressive power.
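To make the notion of communication safety concrete, here is a toy illustration of session-type duality, the idea underlying session calculi: two endpoints interact safely when each send in one matches a receive of the same payload type in the other. This is a deliberately simplified sketch, not the dissertation's type system.

```python
# Toy session-type duality check (illustrative only). A session type is a
# list of actions: ("send", T) or ("recv", T), for a payload type name T.
def dual(session_type):
    """The dual type: sends become receives and vice versa."""
    flip = {"send": "recv", "recv": "send"}
    return [(flip[op], payload) for op, payload in session_type]

def compatible(client, server):
    """Two endpoints can interact safely iff one is the dual of the other."""
    return dual(client) == server

# A client that sends an int, then receives a bool, matches a server
# that receives an int, then sends a bool:
client = [("send", "int"), ("recv", "bool")]
server = [("recv", "int"), ("send", "bool")]
print(compatible(client, server))  # True: no communication mismatch
```

A static check of this kind is what rules out mismatched communications (and, in richer systems, deadlocks) before the program runs.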

Resumo:

Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, as well as the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices; many of these communication links can transfer data at high speed. The concept of distributed systems emerged to describe systems whose different parts are executed on several nodes that interact with each other via a communication network. Java's popularity, facilities and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of the RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques. However, the RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification under the Java Community Process (JSR-302) is being developed. Its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety Critical Java (SCJ). However, neither the RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed.
The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) in order to define appropriate abstractions to overcome this problem; currently there is no formal specification. The aim of this thesis is to develop a communication middleware that is suitable for the development of distributed hard real-time systems in Java, based on the integration between the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented keeping in mind the main requirements, such as predictability and reliability in timing behavior and resource usage. The design starts with the definition of a computational model which identifies, among other things: the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees suitable timing behavior. It also includes mechanisms to monitor functional and timing behavior, and it provides independence from the network protocol by defining a network interface and modules. The JRMP protocol was modified to include the two phases, non-functional parameters, and message-size optimizations. Although serialization is one of the fundamental operations to ensure proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization that introduces a new compiler to generate optimized code according to the computational model.
The proposed solution has the advantage of allowing the communications to be scheduled and the memory usage to be adjusted at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time in each functional block) and the network usage (real consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.

Resumo:

The use of modular or micro maximum power point tracking (MPPT) converters at module level in series association, commercially known as power optimizers, allows the individual adaptation of each panel to the load, solving part of the problems related to partial shadows and different tilt and/or orientation angles of the photovoltaic (PV) modules. This is particularly relevant in building-integrated PV systems. This paper presents useful behavioural analytical studies of cascaded MPPT converters and evaluation test results of a prototype developed under a Spanish national research project. On the one hand, this work focuses on the development of new useful expressions which can be used to identify the behaviour of individual MPPT converters applied to each module and connected in series in a typical grid-connected PV system. On the other hand, a novel characterization method for MPPT converters is developed, and experimental results for the prototype are obtained when individual partial shading is applied and the converters are connected in a typical grid-connected PV array.
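The basic behaviour of such a series string can be pictured with an idealized model: lossless module-level converters share one string current I, so each converter's output voltage settles at V_i = P_i / I to keep its module at its own maximum power point. The following sketch uses this simplification with made-up numbers; it is not one of the expressions derived in the paper.

```python
# Idealized series string of module-level MPPT converters ("power
# optimizers"): lossless converters sharing one string current I.
# Each converter outputs V_i = P_i / I so its module keeps delivering
# its own maximum-power-point power P_i, even under partial shading.
def string_voltages(module_powers_w, string_current_a):
    """Per-converter output voltages (V) for a common string current (A)."""
    return [p / string_current_a for p in module_powers_w]

# Hypothetical 4-module string, one module partially shaded (120 W vs 200 W):
powers = [200.0, 200.0, 120.0, 200.0]
i_string = 8.0  # A, set by the grid-tied inverter
volts = string_voltages(powers, i_string)
print(volts)       # per-converter output voltages
print(sum(volts))  # total string voltage seen by the inverter
```

The shaded module simply contributes a lower voltage instead of dragging down the current of the whole string, which is the core benefit the abstract describes.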

Resumo:

The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One of the applications that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within the network's coverage area. Particularly challenging is the problem of estimating the target's position from the received signal strength indicator (RSSI), due to the nonlinear relationship between the measured signal and the true position of the target. Many of the existing approaches suffer either from high computational complexity (e.g., particle filters) or from lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore very suitable for implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the found solution. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a distributed version of the Gauss-Newton method based on consensus. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting.
While some approaches use a special kind of node (called a sink node) for data harvesting and forwarding to the outside world, there are some scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate in order to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases so that their transmissions add constructively at the receiver. One of the inconveniences associated with collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we find distributed algorithms that converge to the optimal centralized beamformer. While in the first part we consider only battery depletion due to communications beamforming, we extend the model to account for more realistic scenarios by introducing an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from the energy-efficiency perspective, the network's lifetime is significantly improved.
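The gain from phase synchronization is easy to see numerically: N unit-amplitude carriers with aligned phases sum to amplitude N at the receiver, while random phases give only about sqrt(N) on average. The sketch below demonstrates this principle only; it is not the thesis's lifetime-maximizing beamforming scheme.

```python
# Illustrative collaborative-beamforming effect: N nodes transmitting
# unit-amplitude carriers. Aligned phases add coherently at the receiver
# (amplitude ~ N); random phases add incoherently (amplitude ~ sqrt(N)).
import cmath
import random

def received_amplitude(phases_rad):
    """Magnitude of the sum of N unit-amplitude carriers."""
    return abs(sum(cmath.exp(1j * p) for p in phases_rad))

n = 100
aligned = [0.0] * n
random.seed(0)
scattered = [random.uniform(0, 2 * cmath.pi) for _ in range(n)]
print(received_amplitude(aligned))    # fully coherent: equals n
print(received_amplitude(scattered))  # incoherent: ~ sqrt(n) on average
```

The quadratic power gain (amplitude N means power N²) is what lets battery-limited nodes reach a distant base station, while the uncontrolled sidelobes of the random-phase case motivate the interference concern mentioned above.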

Resumo:

ABSTRACT

This Thesis addresses the efficiency problems of electrical grids from the consumption point of view. In particular, efficiency is improved by smoothing the aggregated consumption curve. This objective entails two major improvements in the use of electrical grids: i) in the short term, a better use of the existing infrastructure and ii) in the long term, a reduction of the infrastructure required to supply the same energy needs. In addition, this Thesis faces a new energy paradigm, where distributed generation is widespread over the electrical grids, in particular Photovoltaic (PV) generation. This kind of energy source affects the operation of the grid by increasing its variability, which implies that a high penetration rate of photovoltaic electricity is detrimental to electrical grid stability. This Thesis seeks to smooth the aggregated consumption while taking this energy source into account. Therefore, not only is the efficiency of the electrical grid improved, but the penetration of photovoltaic electricity into the grid can also be increased. This proposal brings great benefits in the economic, social and environmental fields.
The actions that influence the way consumers use electricity in order to achieve energy savings or higher efficiency in energy use are called Demand-Side Management (DSM). This Thesis proposes two different DSM algorithms to meet the aggregated consumption smoothing objective. The difference between the two DSM algorithms lies in the framework in which they operate: the local framework and the grid framework. Depending on the DSM framework, the energy goal and the procedure to reach it are different. In the local framework, the DSM algorithm only uses local information. It does not take into account other consumers or the aggregated consumption of the electrical grid. Although this statement may differ from the general definition of DSM, it makes sense in local facilities equipped with Distributed Energy Resources (DERs). In this case, the DSM is focused on maximizing local energy use, reducing grid dependence. The proposed DSM algorithm significantly improves the self-consumption of the local PV generator. Simulated and real experiments show that self-consumption serves as an important energy management strategy, reducing electricity transport and encouraging users to control their energy behavior. However, despite all the advantages of increased self-consumption, it does not contribute to smoothing the aggregated consumption. The effects of local facilities on the electrical grid are studied when the DSM algorithm is focused on self-consumption maximization. This approach may have undesirable effects, increasing the variability of the aggregated consumption instead of reducing it. This effect occurs because the algorithm only considers local variables. The results suggest that coordination between these facilities is required. Through this coordination, consumption should be modified by taking into account other elements of the grid and seeking to smooth the aggregated consumption.
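The self-consumption metric discussed above can be sketched numerically. The following is a minimal illustration, with hypothetical hourly PV and load profiles (not data from the Thesis), of how shifting a deferrable load into the midday PV peak raises the fraction of locally generated energy consumed on site:

```python
# Sketch of the self-consumption metric: the share of local PV generation
# used on site rather than exported to the grid. Profiles are invented
# for illustration only.

def self_consumption(pv, load):
    """Fraction of PV energy consumed locally (0..1)."""
    used_locally = sum(min(p, l) for p, l in zip(pv, load))
    total_pv = sum(pv)
    return used_locally / total_pv if total_pv else 0.0

# Hypothetical 6-hour profiles in kWh: same total load energy, but in the
# second profile a deferrable load is moved into the PV production hours.
pv    = [0.0, 1.0, 3.0, 3.0, 1.0, 0.0]
base  = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
shift = [0.5, 0.5, 2.0, 2.0, 0.5, 0.5]

print(self_consumption(pv, base))   # 0.5
print(self_consumption(pv, shift))  # 0.625
```

Note that both profiles consume the same total energy; only its timing changes, which is exactly the lever that local DSM exploits.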
In the grid framework, the DSM algorithm takes into account both local and grid information. This Thesis develops a self-organized algorithm to manage the consumption of an electrical grid in a distributed way. The goal of this algorithm is aggregated consumption smoothing, as in classical DSM implementations. The distributed approach means that the DSM is performed from the consumer side without following direct commands issued by a central entity. Therefore, this Thesis proposes a parallel management structure rather than a hierarchical one as in classical electrical grids. This implies that a coordination mechanism between facilities is required. This Thesis seeks to minimize the amount of information necessary for this coordination. To achieve this objective, two collective coordination techniques have been used: coupled oscillators and swarm intelligence. The combination of these techniques to coordinate a system with the characteristics of the electrical grid is itself a novel approach. Therefore, this coordination objective is a contribution not only to the energy management field, but also to the field of collective systems. Results show that the proposed DSM algorithm reduces the difference between the maximums and minimums of the electrical grid in proportion to the amount of energy controlled by the algorithm. Thus, the greater the amount of energy controlled by the algorithm, the greater the improvement in the efficiency of the electrical grid. In addition to the advantages resulting from the smoothing of the aggregated consumption, other advantages arise from the distributed approach followed in this Thesis. These advantages are summarized in the following features of the proposed DSM algorithm: Robustness: in a centralized system, a failure or breakage of the central node causes a malfunction of the whole system. Managing a grid from a distributed point of view implies that there is no central control node.
A failure in any facility does not affect the overall operation of the grid. Data privacy: the use of a distributed topology means that there is no central node holding sensitive information about all consumers. This Thesis goes a step further: the proposed DSM algorithm does not use specific information about consumer behavior, making the coordination between facilities completely anonymous. Scalability: the proposed DSM algorithm operates with any number of facilities, so new facilities can be incorporated without affecting its operation. Low cost: the proposed DSM algorithm adapts to current grids without any topological requirements. In addition, every facility computes its own management with low computational requirements, so a central node with high computational power is not needed. Quick deployment: the scalability and low-cost features of the proposed DSM algorithm allow quick deployment; no complex deployment schedule is required for this system.
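The coupled-oscillator coordination mentioned above can be sketched with a Kuramoto-style model. In the toy version below, each facility owns only a phase, and a repulsive (negative) coupling constant pushes the phases apart, so periodic loads fire staggered in time instead of simultaneously. All parameters are illustrative assumptions; the Thesis' actual algorithm is not reproduced here:

```python
import math

# Kuramoto model with repulsive coupling: each facility exchanges only its
# phase (no consumption data), and the negative coupling desynchronizes the
# population, which staggers load activations and smooths the aggregate.

def step(phases, k=-0.5, dt=0.1, omega=1.0):
    """One Euler step of d(theta_i)/dt = omega + (k/n) * sum_j sin(theta_j - theta_i)."""
    n = len(phases)
    return [(th + dt * (omega + k * sum(math.sin(tj - th) for tj in phases) / n))
            % (2 * math.pi) for th in phases]

def order_parameter(phases):
    """|r| = 1 means full synchrony (worst case: all loads switch at once)."""
    n = len(phases)
    re = sum(math.cos(t) for t in phases) / n
    im = sum(math.sin(t) for t in phases) / n
    return math.hypot(re, im)

phases = [0.0, 0.1, 0.2, 0.3]   # four facilities, initially almost in phase
r0 = order_parameter(phases)
for _ in range(500):
    phases = step(phases)
r1 = order_parameter(phases)
print(round(r0, 2), round(r1, 2))  # synchrony drops as the phases spread out
```

With attractive coupling (k > 0) the same model would synchronize the facilities; the sign flip is what turns a synchronization mechanism into a load-staggering one.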

Resumo:

The search for new energy models arises from the need for a sustainable power supply. The inclusion of distributed generation (DG) sources makes it possible to reduce the cost of facilities, increase the security of the grid, and alleviate congestion problems through the redistribution of power flows. Remote microgrids particularly need a safe and reliable supply that can cover demand at low cost; for this reason, distributed generation is an alternative that is being widely introduced in these grids. However, remote microgrids are especially weak grids because of their small size, low voltage level, limited network meshing, and distribution lines with a high R/X ratio. This ratio affects the coupling between grid voltages and phase shifts, and stability becomes an issue of greater importance than in interconnected systems. To ensure the appropriate behavior of generation sources inserted in remote microgrids (and, in general, of any electrical equipment), it is essential to have devices for testing and certification. These devices must not only faithfully reproduce the disturbances occurring in remote microgrids, but also behave toward the equipment under test (EUT) as a real weak grid would; this also makes the device commercially competitive. To meet these objectives, and based on the above, a voltage disturbance generator has been designed, built and tested, in order to provide a simple, versatile, complete and easily scalable device to manufacturers and laboratories in the sector.
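The role of the R/X ratio mentioned above can be illustrated with the usual approximation for the voltage drop across a line, dV ≈ (R·P + X·Q)/V. The line impedances and power flows below are invented for illustration only; the point is that for the same impedance magnitude and the same power flow, a resistive (high R/X) line moves the voltage far more, which is what makes the grid "weak":

```python
# Why a high R/X ratio makes a microgrid weak: with dV ~ (R*P + X*Q)/V,
# active power flows shift the voltage strongly when R dominates.
# All numbers are illustrative, not taken from the paper.

def voltage_drop(p_w, q_var, r_ohm, x_ohm, v_nom=400.0):
    """Approximate voltage drop (V) on a line carrying P (W) and Q (var)."""
    return (r_ohm * p_w + x_ohm * q_var) / v_nom

# Same impedance magnitude |Z| = 0.5 ohm and the same 10 kW / 2 kvar flow,
# but very different R/X ratios:
weak  = voltage_drop(10e3, 2e3, r_ohm=0.40, x_ohm=0.30)    # R/X ~ 1.3 (LV line)
stiff = voltage_drop(10e3, 2e3, r_ohm=0.05, x_ohm=0.497)   # R/X ~ 0.1 (HV-like)

print(round(weak, 1), round(stiff, 1))  # 11.5 V vs 3.7 V
```

This P-to-V coupling is why a test generator for such grids must present a realistic weak-grid impedance to the EUT, not just replay voltage waveforms.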

Resumo:

ABSTRACT This PhD Thesis is framed within the field of dynamically reconfigurable embedded systems, advanced sensor networks and distributed computing. The document is centred on the study of processing solutions for High-Performance Autonomous Distributed Systems (HPADS) as well as their evolution towards High-Performance Computing (HPC) systems. The study addresses both the platform level and the processing architectures within the platform in order to optimise critical aspects such as computing performance, energy efficiency and fault tolerance. HPADS are feedback systems, normally networked and/or distributed, with real-time adaptive and predictive functionality. These systems, as part of more complex systems known as Cyber-Physical Systems (CPSs), can be applied in a wide range of fields such as military, health care, manufacturing or aerospace applications. For the design of HPADS, high levels of dependability, the definition of suitable models of computation, and the use of methodologies and tools that support scalability and complexity management are required. The first part of the document studies the platform-level alternatives available in the state of the art, together with the description, development and validation tests of the platform proposed in this work to cope with the previously mentioned requirements.
The main objectives targeted by this platform design are the following: Study the feasibility of using SRAM-based FPGAs as the main processor of the platform in terms of energy consumption and performance for highly demanding applications. Analyse and propose energy management techniques to reduce energy consumption in every stage of the working profile of the platform. Provide a solution with dynamic, partial and wireless remote HW reconfiguration (DPR) to be able to change certain parts of the FPGA design at run time and on demand without interrupting the rest of the system. Demonstrate the applicability of the platform in different test-bench applications. In order to select the best approach for the platform design in terms of processing alternatives, a study of the evolution of state-of-the-art platforms is required to analyse how different architectures cope with new, more demanding applications and scenarios: security, mixed-critical systems for aerospace, multimedia applications, or military environments, among others. In all these scenarios, important changes in the required processing bandwidth or in the complexity of the algorithms used are driving the migration of platforms from single-microprocessor architectures to multiprocessing and heterogeneous solutions with higher instantaneous power consumption but better energy efficiency. Among these solutions, FPGAs and Systems on Chip that include FPGA fabric and dedicated hard processors offer a good trade-off among flexibility, processing performance, energy consumption and price when they are used in demanding applications where working conditions are very likely to vary over time and highly complex algorithms are required. The platform architecture proposed in this PhD Thesis is called HiReCookie. It includes an SRAM-based FPGA as the main and only processing unit. The FPGA selected, the Xilinx Spartan-6 LX150, was, at the beginning of this work, the best choice in terms of amount of resources and power consumption.
Although its power levels are among the lowest for this kind of device, they can still be very high for distributed systems, which normally run on batteries. For that reason, it is necessary to include different energy-saving mechanisms to increase the usability of the platform. In order to reduce energy consumption, the platform architecture is divided into different power islands so that only those parts of the system that are strictly needed are powered on, while the rest of the islands can be completely switched off. This allows different low-power modes to be combined to decrease energy consumption. In addition, one of the most important handicaps of SRAM-based FPGAs is that they are not operational at power-up. Therefore, recovering the system from a switched-off state requires reloading the FPGA configuration from a non-volatile memory device. For that reason, this PhD Thesis also proposes a methodology to compress the FPGA configuration file in order to reduce the time and energy spent during the initial configuration process. Although some of the requirements for the design of HPADS are already covered by the design of the HiReCookie platform, it is necessary to continue improving energy efficiency, computing performance and fault tolerance. This is only possible by exploiting all the opportunities provided by the processing architectures configured inside the FPGA. Therefore, the second part of the thesis details the design of the so-called ARTICo3 FPGA architecture to enhance the already intrinsic capabilities of the FPGA. ARTICo3 is a DPR-capable, bus-based virtual architecture for multiple HW acceleration in SRAM-based FPGAs. The architecture provides support for dynamic resource management in real time. In this way, by using DPR, it is possible to change the levels of computing performance, energy consumption and fault tolerance on demand by increasing or decreasing the amount of resources used by the different tasks.
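The benefit of compressing the configuration file can be illustrated with a toy sketch. The abstract does not specify the compression scheme, so the code below uses plain run-length encoding as a hypothetical stand-in; the point is that bitstreams with long runs of identical (often zero) bytes shrink substantially, which shortens the wake-up configuration time proportionally:

```python
# Stand-in illustration of configuration-file compression (NOT the method
# from the Thesis): run-length encode a toy "bitstream" whose body is
# dominated by a long zero-padded region, as real bitstreams often are.

def rle_encode(data: bytes) -> bytes:
    """Encode as (run_length, value) byte pairs, runs capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes((run, data[i]))
        i += run
    return bytes(out)

# Toy bitstream: a 4-byte header, 1000 zero bytes, and a short trailer.
bitstream = b"\xaa\x99\x55\x66" + b"\x00" * 1000 + b"\xff" * 20
packed = rle_encode(bitstream)
print(len(bitstream), len(packed))  # 1024 -> 18 bytes in this toy case
```

In a real design the decompressor must run on the node before the FPGA is configured, so its simplicity matters as much as the compression ratio.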
Apart from the detailed design of the architecture and its implementation in different FPGA devices, different validation tests and comparisons are also shown. The main objectives targeted by this FPGA architecture are listed as follows: Provide a method based on a multithread approach, such as the kernel executions offered by CUDA (Compute Unified Device Architecture) or OpenCL, where kernels are executed in a variable number of HW accelerators without requiring application code changes. Provide an architecture to dynamically adapt working points according to either self-measured or external parameters in terms of energy consumption, fault tolerance and computing performance. Taking advantage of DPR capabilities, the architecture must provide support for a dynamic use of resources in real time. Exploit concurrent processing capabilities in a standard bus-based system by optimizing data transactions to and from HW accelerators. Measure the advantage of HW acceleration as a technique to boost performance, improving processing times and saving energy by reducing active times in distributed embedded systems. Dynamically change the levels of HW redundancy to adapt fault tolerance in real time. Provide HW abstraction from SW application design. FPGAs give the possibility of designing specific HW blocks for every required task to optimise performance, and some of them also support DPR. Apart from the possibilities provided by manufacturers, the way these HW modules are organised, addressed and multiplexed in area and time can improve computing performance and energy consumption. At the same time, fault tolerance and security techniques can also be dynamically included using DPR. However, the inherent complexity of designing new HW modules for every application is not negligible.
It consists not only of the HW description, but also of the design of drivers and interfaces with the rest of the system, while the design space becomes wider and more complex to define and program. Even though the tools provided by the majority of manufacturers already include predefined bus interfaces, commercial IPs, and templates to ease application prototyping, it is necessary to improve these capabilities. By adding new architectures on top of them, it is possible to take advantage of parallelization and HW redundancy while providing a framework to ease the use of dynamic resource management. ARTICo3 works within a solution space where working points change at run time along three different axes: Computation, Consumption, and Fault Tolerance. Therefore, every working point is found as a trade-off solution among these three axes. By means of DPR, different accelerators can be multiplexed so that the amount of resources available for any application is virtually unlimited. Taking advantage of DPR capabilities and a novel way of transmitting data to the reconfigurable HW accelerators, it is possible to dedicate a dynamically changing number of resources to a given task in order to either boost computing speed or add HW redundancy with a voting process to increase fault-tolerance levels. At the same time, using an optimised amount of resources for a given task reduces energy consumption by reducing instantaneous power or computing time. In order to keep complexity within reasonable limits, it is important that HW changes are transparent to the application code. Therefore, different levels of transparency are targeted by the system: Scalability transparency: a task must be able to expand its resources without changing the system structure or application algorithms. Performance transparency: the system must reconfigure itself as the load changes.
Replication transparency: multiple instances of the same task are loaded to increase reliability and performance. Location transparency: resources are accessed with no knowledge of their location by the application code. Failure transparency: tasks must be completed despite failures in some components. Concurrency transparency: different tasks work concurrently in a way that is transparent to the application code. As can be seen, the Thesis therefore contributes in two different ways: first, with the design of the HiReCookie platform and, second, with the design of the ARTICo3 architecture. The main contributions of this PhD Thesis are listed below: Architecture of the HiReCookie platform including: o Compatibility of the processing layer for high-performance applications with the Cookies Wireless Sensor Network platform for fast prototyping and implementation. o A division of the architecture into power islands. o All the different low-power modes. o The creation of the partial-initial bitstream together with the wake-up policies of the node. The design of the reconfigurable architecture for SRAM FPGAs, ARTICo3: o A model of computation and execution modes inspired by CUDA but based on reconfigurable HW with a dynamic number of thread blocks per kernel. o A structure to optimise burst data transactions providing coalesced or parallel data to HW accelerators, a parallel voting process and reduction operations. o The abstraction provided to the host processor with respect to the operation of the kernels in terms of the number of replicas, modes of operation, location in the reconfigurable area and addressing. o The architecture of the modules representing the thread blocks, which makes the system scalable: functional units are added simply by adding an access to a BRAM port.
o The online characterization of the kernels to provide information to a scheduler or resource manager in terms of energy consumption and processing time when changing among different fault-tolerance levels, as well as if a kernel is expected to work in the memory-bounded or computing-bounded areas. The document of the Thesis is divided into two main parts with a total of five chapters. First, after motivating the need for new platforms to cover new more demanding applications, the design of the HiReCookie platform, its parts and several partial tests are detailed. The design of the platform alone does not cover all the needs of these applications. Therefore, the second part describes the architecture inside the FPGA, called ARTICo3, proposed in this PhD Thesis. The architecture and its implementation are tested in terms of energy consumption and computing performance showing different possibilities to improve fault tolerance and how this impact in energy and time of processing. Chapter 1 shows the main goals of this PhD Thesis and the technology background required to follow the rest of the document. Chapter 2 shows all the details about the design of the FPGA-based platform HiReCookie. Chapter 3 describes the ARTICo3 architecture. Chapter 4 is focused on the validation tests of the ARTICo3 architecture. An application for proof of concept is explained where typical kernels related to image processing and encryption algorithms are used. Further experimental analyses are performed using these kernels. Chapter 5 concludes the document analysing conclusions, comments about the contributions of the work, and some possible future lines for the work.
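The Computation/Consumption/Fault-Tolerance trade-off described above can be sketched as a simple model: given a fixed pool of reconfigurable accelerator slots loaded with replicas of one kernel, the same slots can be spent on parallel throughput, duplication with comparison, or triplication with voting. This is a hypothetical illustration, not the actual ARTICo3 implementation; the function name, modes, and per-slot figures are assumptions.

```python
# Hypothetical model of an ARTICo3-style working point: `slots` copies of
# one kernel, traded between speed and redundancy. Numbers are illustrative.

def working_point(slots, mode, t_block=1.0, p_slot=0.5):
    """Return (throughput in blocks/s, redundancy level, power in W)."""
    if mode == "performance":   # every copy processes different data
        return (slots / t_block, 1, slots * p_slot)
    if mode == "dmr":           # pairs + comparison: error detection
        groups = slots // 2
        return (groups / t_block, 2, 2 * groups * p_slot)
    if mode == "tmr":           # triples + majority voting: error masking
        groups = slots // 3
        return (groups / t_block, 3, 3 * groups * p_slot)
    raise ValueError(mode)

# With six slots: maximum speed, or triple redundancy at a third of it.
fast = working_point(6, "performance")
safe = working_point(6, "tmr")
```

The point of the sketch is that the scheduler moves along one axis (redundancy) at the direct expense of another (throughput) while the slot count, and hence instant power, stays fixed.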

Relevância:

50.00% 50.00%

Publicador:

Resumo:

The current trend in the evolution of sensor systems seeks ways to provide more accuracy and resolution while decreasing size and power consumption. Field Programmable Gate Arrays (FPGAs) provide reprogrammable hardware technology that can be properly exploited to obtain a reconfigurable sensor system. This adaptation capability enables the implementation of complex applications using partial reconfigurability at very low power consumption. FPGAs have been favoured for highly demanding tasks due to the high efficiency provided by their architectural flexibility (parallelism, on-chip memory, etc.), their reconfigurability, and their superb performance in the implementation of algorithms. FPGAs have improved the performance of sensor systems and have triggered a clear increase in their use in new fields of application. A new generation of smarter, reconfigurable, lower-power sensors is being developed in Spain based on FPGAs. This paper reviews these developments, describing the FPGA technologies employed by the different research groups and providing an overview of future research within this field.

Relevância:

50.00% 50.00%

Publicador:

Resumo:

This paper proposes a transmission and wheeling pricing method based on tracing monetary flows along power flow paths: the monetary flow-monetary path method. Active and reactive power flows are converted into monetary flows by using nodal prices. The method introduces a uniform measurement of transmission service usage by active and reactive power. Because monetary flows are related to the nodal prices, the impacts of generators and loads on operation constraints, and the interactive impacts between active and reactive power, can be considered. The total transmission service cost is separated into more practical line-related costs and a system-wide cost, and can be flexibly distributed between generators and loads. The method is able to reconcile transmission service cost fairly and to optimize transmission system operation and development. A case study on the IEEE 30-bus test system shows that the proposed pricing method is effective in creating economic signals towards the efficient use and operation of the transmission system. (c) 2005 Elsevier B.V. All rights reserved.
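The core conversion step can be illustrated with a deliberately simplified sketch: a line's power flow is priced at its sending-end nodal price to form the monetary flow, and the nodal-price difference across the line yields the line-related (congestion) component. This is our own two-bus illustration under assumed prices, not the paper's method or its IEEE 30-bus case.

```python
# Simplified illustration (assumed data): turning a line power flow into
# a monetary flow using nodal prices.

def monetary_flow(p_mw, price_from, price_to):
    """Return (monetary flow in $/h priced at the sending end,
    line-related component from the nodal price difference)."""
    flow_value = p_mw * price_from               # monetary flow on the line
    line_component = p_mw * (price_to - price_from)  # congestion share
    return flow_value, line_component

# A line carrying 100 MW from a $20/MWh bus to a $25/MWh bus:
value, rent = monetary_flow(100.0, 20.0, 25.0)
```

In a full network the same pricing would be applied along every traced path, and the line components summed into the line-related cost pool that the paper distributes between generators and loads.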

Relevância:

50.00% 50.00%

Publicador:

Resumo:

The increase in renewable energy generators introduced into the electricity grid is putting pressure on its stability and management, since renewable energy sources cannot be predicted accurately or fully controlled. This, with the additional pressure of fluctuations in demand, presents a problem more complex than the current methods of controlling electricity distribution were designed for. A global, approximate, and distributed optimisation method for power allocation that accommodates uncertainties and volatility is suggested and analysed. It is based on a probabilistic method known as message passing [1], which has deep links to statistical physics methodology. This principled optimisation method is based on local calculations and inherently accommodates uncertainties; it is of modest computational complexity and provides good approximate solutions. We consider uncertainty and fluctuations drawn from a Gaussian distribution and incorporate them into the message-passing algorithm. We examine the effect that increasing uncertainty has on the transmission cost and how the placement of volatile nodes within a grid, such as renewable generators or consumers, affects it.
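Why Gaussian fluctuations raise transmission cost at all can be seen in a toy example that is our own construction, not the paper's algorithm: with a quadratic cost c(x) = x² on a line and zero-mean Gaussian fluctuations of standard deviation sigma around the planned flow, the expected cost is the planned cost plus sigma², so volatile nodes are cheapest to absorb where the flows they perturb are small.

```python
# Toy Monte Carlo check (our construction) that Gaussian flow fluctuations
# add sigma^2 to an expected quadratic transmission cost:
# E[(flow + xi)^2] = flow^2 + sigma^2 for xi ~ N(0, sigma^2).

import random

def expected_cost(flow, sigma, samples=200_000, seed=0):
    rng = random.Random(seed)            # seeded for reproducibility
    total = 0.0
    for _ in range(samples):
        x = flow + rng.gauss(0.0, sigma)
        total += x * x
    return total / samples

est = expected_cost(2.0, 1.0)   # analytically 2**2 + 1**2 = 5.0
```

The same penalty term is what a message-passing solver would propagate locally from each volatile node, which is why placement matters.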

Relevância:

50.00% 50.00%

Publicador:

Resumo:

Over the past few decades, we have been enjoying tremendous benefits thanks to the revolutionary advancement of computing systems, driven mainly by remarkable semiconductor technology scaling and increasingly complicated processor architectures. However, the exponentially increased transistor density has directly led to exponentially increased power consumption and dramatically elevated system temperature, which not only adversely impacts the system's cost, performance, and reliability, but also increases the leakage and thus the overall power consumption. Today, power and thermal issues pose enormous challenges and threaten to slow down the continued evolution of computer technology. Effective power/thermal-aware design techniques are urgently demanded at all design abstraction levels, from the circuit level and the logic level to the architectural level and the system level. In this dissertation, we present our research efforts to employ real-time scheduling techniques to solve resource-constrained, power/thermal-aware design-optimization problems. In our research, we developed a set of simple yet accurate system-level models to capture the processor's thermal dynamics as well as the interdependency of leakage power consumption, temperature, and supply voltage. Based on these models, we investigated the fundamental principles of power/thermal-aware scheduling and developed real-time scheduling techniques targeting a variety of design objectives, including peak temperature minimization, overall energy reduction, and performance maximization. The novelty of this work is that we integrate cutting-edge research on power and thermal behaviour at the circuit and architectural levels into a set of accurate yet simplified system-level models, and are able to conduct system-level analysis and design based on these models.
The theoretical study in this work serves as a solid foundation to guide the development of power/thermal-aware scheduling algorithms in practical computing systems.
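The leakage-temperature interdependency described above can be sketched with a minimal lumped thermal model: temperature follows the total dissipated power, and leakage power in turn rises with temperature, creating the positive feedback a thermal-aware scheduler must keep bounded. All coefficients below are illustrative assumptions, not the dissertation's calibrated models.

```python
# Minimal sketch (assumed coefficients) of a system-level thermal model
# with temperature-dependent leakage, stepped with forward Euler:
#   dT/dt = a * P_total - b * (T - T_amb),  P_total = P_dyn + c0 + c1 * T.

def simulate(p_dyn, t_amb=45.0, a=0.5, b=0.5, c0=2.0, c1=0.02,
             dt=0.1, steps=5000):
    """Return the steady-state temperature for a given dynamic power."""
    t = t_amb
    for _ in range(steps):
        p_leak = c0 + c1 * t        # leakage grows with temperature...
        t += dt * (a * (p_dyn + p_leak) - b * (t - t_amb))
        # ...which raises temperature and hence leakage again.
    return t

t_low = simulate(5.0)    # lighter load
t_high = simulate(10.0)  # heavier load runs hotter
```

Setting the derivative to zero gives the fixed point T = (a·P_dyn + a·c0 + b·T_amb) / (b - a·c1), which the simulation converges to; a scheduler built on such a model caps P_dyn so that this fixed point stays below the thermal limit.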

Relevância:

50.00% 50.00%

Publicador:

Resumo:

In this paper, a new open-winding control strategy is proposed for a brushless doubly-fed reluctance generator (BDFRG) applicable to wind turbines. The BDFRG control winding is fed via a dual two-level three-phase converter using a single dc bus. Direct power control based on maximum power point tracking with common-mode voltage elimination is designed, which not only decouples the active and reactive power, but also greatly improves reliability and redundancy by increasing the number of switching modes of operation, while the DC-link voltage and the rating of the power devices are reduced by 50% compared to traditional three-level converter systems. Its effectiveness is evaluated by simulation tests based on a 42-kW prototype generator.
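The common-mode voltage (CMV) elimination idea can be sketched by enumeration: each two-level inverter's CMV is Vdc times the mean of its three switch states, the winding sees the difference of the two inverters' CMVs, and only state pairs with equal switch sums produce zero winding CMV. This enumeration is our own hedged illustration of the principle, not the paper's exact modulation scheme.

```python
# Sketch (our enumeration, per-unit dc bus) of zero-CMV switching-state
# pairs for a dual two-level converter feeding an open winding.

from itertools import product

VDC = 1.0  # per-unit dc-bus voltage

def cmv(state):
    """Common-mode voltage of one two-level inverter, state in {0,1}^3."""
    return VDC * sum(state) / 3.0

def zero_cmv_pairs():
    """All (inverter1, inverter2) state pairs whose winding CMV is zero."""
    states = list(product((0, 1), repeat=3))
    return [(s1, s2) for s1 in states for s2 in states
            if cmv(s1) == cmv(s2)]

pairs = zero_cmv_pairs()  # the modulator restricts itself to these pairs
```

Of the 64 possible state pairs, 20 give zero winding CMV, which is the enlarged set of usable switching modes that the abstract credits for the improved redundancy.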