20 results for optimal power flow problem

at Universidad Politécnica de Madrid


Relevance:

100.00%

Publisher:

Abstract:

The aim of this thesis is the characterization of combustion-based electrical generation representative of actual power plants, in order to model and simulate it within a reference electrical grid and to carry out economic-environmental multiobjective optimization studies. The first step is an analysis of the current energy and electricity context, focused on the Spanish peninsular system, where fuel-oil plants have been shut down and only combined cycles and coal units of different ranks remain. An analysis of the main environmental impacts of combustion-based power plants follows, represented chiefly by their CO2, SO2 and NOx emissions, together with the corresponding control and mitigation measures and the applicable regulations. Next, based on the fuel properties and the unit heat-rate data, the thermal units are characterized through the relevant functions that define their energy, economic and environmental behaviour, expressed as hourly output functions of load; optional denitrification and desulfurization are considered. Since the objectives are multiple and mutually conflicting, multiobjective methods are used that identify the set of optimal points, the Pareto front: given a solution on the front, no other solution improves one objective without worsening another. Several multiobjective optimization methods were analysed, and the ε-constraint technique was selected, as it can find non-convex fronts and the strict Pareto optimality of its solutions can be verified. A balanced representation of anthracite, domestic and imported bituminous coal, lignite and combined-cycle plants was integrated into the IEEE-57 network, which accommodates seven units without significantly distorting the actual rated powers of the plants, and the AC optimal power flow with the integrated multiobjective method was programmed in Matlab. The Pareto fronts of the combinations of cost with each of the three emission types, and of all four objectives together, are identified, yielding the optimal system costs over the whole emissions range. The cost to the system of abating one additional tonne of any emission type by shifting to cleaner generation mixes is evaluated: the points found guarantee that, at a given emissions level, the cost cannot be improved and that, at a given cost, emissions cannot be reduced further. Finally, it is shown how the Pareto fronts can be used to trace optimal production strategies under hourly load changes.
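As a concrete illustration of the ε-constraint technique applied to the optimal power flow above, the following MATLAB sketch traces a two-objective Pareto front (cost versus one emission type) by sweeping the emission bound; `total_cost`, `total_emission` and `opf_constraints` are hypothetical stand-ins for the thesis's unit models and AC power-flow constraints, not its actual code.

```matlab
% Epsilon-constraint sweep: minimize cost subject to emission <= eps(k).
% total_cost, total_emission and opf_constraints are hypothetical helpers
% standing in for the unit models and the AC power-flow equations/limits.
P0 = [100; 150; 200; 120; 180; 250; 300];   % initial dispatch guess, MW (7 units)
lb = 50*ones(7,1);  ub = 400*ones(7,1);     % illustrative generator limits
eps_grid = linspace(2e2, 8e2, 30);          % emission bounds to sweep (t/h, illustrative)
front = zeros(numel(eps_grid), 2);
opts = optimoptions('fmincon', 'Display', 'off');
for k = 1:numel(eps_grid)
    % Inequality: total_emission(P) - eps <= 0; equalities from the power flow.
    nonlcon = @(P) deal(total_emission(P) - eps_grid(k), opf_constraints(P));
    [Pk, fk] = fmincon(@total_cost, P0, [], [], [], [], lb, ub, nonlcon, opts);
    front(k, :) = [total_emission(Pk), fk]; % one Pareto point per bound
    P0 = Pk;                                % warm-start the next subproblem
end
```

Sweeping ε from the unconstrained-emission level down to the minimum attainable emission recovers the front point by point, including non-convex regions, which is the property that motivated the choice of the method.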

Relevance:

100.00%

Publisher:

Abstract:

In recent decades, full electric and hybrid electric vehicles have emerged as an alternative to conventional cars due to a range of factors, including environmental and economic aspects. These vehicles are the result of considerable efforts to seek ways of reducing the use of fossil fuel for vehicle propulsion. Sophisticated technologies such as hybrid and electric powertrains require careful study and optimization, and mathematical models play a key role at this point. Currently, many advanced mathematical analysis tools, as well as computer applications, have been built for vehicle simulation purposes. Given the great interest in hybrid and electric powertrains, along with the increasing importance of reliable computer-based models, the author decided to integrate both aspects in the research purpose of this work. Furthermore, this is one of the first final degree projects carried out at the ETSII (Higher Technical School of Industrial Engineers) that covers the study of hybrid and electric propulsion systems. The present project is based on MBS3D 2.0, specialized software for the dynamic simulation of multibody systems developed at the UPM Institute of Automobile Research (INSIA). Automobiles are a clear example of complex multibody systems, which are present in nearly every field of engineering. The work presented here benefits from the availability of the MBS3D software, a program that has proven to be a very efficient tool with a highly developed underlying mathematical formulation. On this basis, the focus of this project is the extension of MBS3D in order to perform dynamic simulations of hybrid and electric vehicle models. This requires the joint simulation of the mechanical model of the vehicle together with the model of the hybrid or electric powertrain. These sub-models belong to completely different physical domains: the powertrain consists of energy storage systems, electrical machines and power electronics, connected to purely mechanical components (wheels, suspension, transmission, clutch…). The challenge today is to create a global vehicle model that is valid for computer simulation. Therefore, the main goal of this project is to apply co-simulation methodologies to a comprehensive model of an electric vehicle, where sub-models from different areas of engineering are coupled. The created electric vehicle (EV) model consists of a separately excited DC electric motor, a Li-ion battery pack, a DC/DC chopper converter and a multibody vehicle model. Co-simulation techniques allow car designers to simulate complex vehicle architectures and behaviors, which are usually difficult to implement in a real environment due to safety and/or economic reasons. In addition, multi-domain computational models help to detect the effects of different driving patterns and parameters and to improve the models in a fast and effective way. Automotive designers can greatly benefit from a multidisciplinary approach to new hybrid and electric vehicles. In this case, the global electric vehicle model includes an electrical subsystem and a mechanical subsystem. The electrical subsystem consists of three basic components: electric motor, battery pack and power converter. A modular representation is used for building the dynamic model of the vehicle drivetrain: every component of the drivetrain (submodule) is modeled separately and has its own general dynamic model, with clearly defined inputs and outputs.
Then, all the particular submodules are assembled according to the drivetrain configuration and, in this way, the power flow across the components is completely determined. Dynamic models of electrical components are often based on equivalent circuits, where Kirchhoff’s voltage and current laws are applied to derive the algebraic and differential equations. Here, a Randles circuit is used for the dynamic modeling of the battery, and the electric motor is modeled through the analysis of the equivalent circuit of a separately excited DC motor, in which the power converter is included. The mechanical subsystem is defined by the MBS3D equations, which consider the position, velocity and acceleration of all the bodies comprising the vehicle multibody system. MBS3D 2.0 is entirely written in MATLAB, and the structure of the program has been thoroughly studied and understood by the author. The MBS3D software is adapted according to the requirements of the applied co-simulation method: some of the core functions, such as the integrator and graphics routines, are modified, and several auxiliary functions are added in order to compute the mathematical model of the electrical components. By coupling and co-simulating both subsystems, it is possible to evaluate the dynamic interaction among all the components of the drivetrain. A 'tight-coupling' method is used to co-simulate the sub-models. This approach integrates all subsystems simultaneously, and the results of the integration are exchanged by function call. This means that the integration is done jointly for the mechanical and the electrical subsystem under a single integrator, so the speed of integration is determined by the slower subsystem. Simulations are then used to show the performance of the developed EV model. However, this project focuses more on the validation of the computational and mathematical tool for electric and hybrid vehicle simulation. For this purpose, a detailed study and comparison of different integrators within the MATLAB environment is carried out, and the main efforts are directed towards the implementation of co-simulation techniques in the MBS3D software. In this regard, it is not intended to create an extremely precise EV model in terms of real vehicle performance, although an acceptable level of accuracy is achieved. The gap between the EV model and the real system is filled, in a way, by introducing the gas and brake pedal inputs, which reflect the actual driver behavior. These inputs enter directly into the differential equations of the model and determine the amount of current provided to the electric motor. For a separately excited DC motor, the traction torque delivered to the car wheels is proportional to the rotor current. Therefore, as occurs in real vehicle models, the propulsion torque in the mathematical model is controlled through acceleration and brake pedal commands. The designed transmission system also includes a reduction gear that adapts the torque coming from the motor drive and transfers it to the wheels. The main contribution of this project is, therefore, the implementation of a new calculation path for the wheel torques, based on performance characteristics and outputs of the electric powertrain model. Originally, the wheel traction and braking torques were input to MBS3D through a vector directly computed by the user in a MATLAB script. Now, they are calculated as a function of the motor current which, in turn, depends on the current provided by the battery pack across the DC/DC chopper converter.
The motor and battery currents and voltages are the solutions of the electrical ODE (Ordinary Differential Equation) system coupled to the multibody system. Simultaneously, the outputs of the MBS3D model are the position, velocity and acceleration of the vehicle at all times. The motor shaft speed is computed from the output vehicle speed considering the wheel radius, the gear reduction ratio and the transmission efficiency, and is then introduced into the differential equations corresponding to the electrical subsystem. In this way, MBS3D and the electrical powertrain model are interconnected and both subsystems exchange values, as expected in a tight-coupling approach. When programming mathematical models of complex systems, code optimization is a key step in the process. A way to improve the overall performance of the integration, making use of C/C++ as an alternative programming language, is described and implemented. Although this entails a higher implementation effort, it leads to important advantages regarding co-simulation speed and stability. In order to do this, it is necessary to integrate MATLAB with another integrated development environment (IDE) where C/C++ code can be generated and executed. In this project, C/C++ files are programmed in Microsoft Visual Studio, and the interface between both IDEs is created by building C/C++ MEX-file functions: programs containing functions or subroutines that can be dynamically linked and executed from MATLAB. This process achieves reductions in simulation time of up to two orders of magnitude. The tests performed with different integrators also reveal the stiff character of the differential equations corresponding to the electrical subsystem, and allow the improvement of the co-simulation process. When varying the parameters of the integration and/or the initial conditions of the problem, the solutions of the system of equations show better dynamic response and stability, depending on the integrator used. Several integrators, with fixed and variable step size and for stiff and non-stiff problems, are applied to the coupled ODE system, and the results are analyzed, compared and discussed. From all the above, the project can be divided into four main parts: 1. creation of the equation-based electric vehicle model; 2. programming, simulation and adjustment of the electric vehicle model; 3. application of co-simulation methodologies to MBS3D and the electric powertrain subsystem; and 4. code optimization and study of different integrators. Additionally, in order to place the project in context, the first chapters include an introduction to basic vehicle dynamics, the current classification of hybrid and electric vehicles, and an explanation of the technologies involved, such as brake energy regeneration, electric and non-electric propulsion systems for EVs and HEVs (hybrid electric vehicles) and their control strategies. Later, the problem of dynamic modeling of hybrid and electric vehicles is discussed, and the integrated development environment and the simulation tool are briefly described. The core chapters explain the major co-simulation methodologies and how they have been programmed and applied to the electric powertrain model together with the multibody system dynamic model. Finally, the last chapters summarize the main results and conclusions of the project and propose further research topics.
In conclusion, co-simulation methodologies are applicable within the integrated development environments MATLAB and Visual Studio, and the simulation tool MBS3D 2.0, where equation-based models of multidisciplinary subsystems, consisting of mechanical and electrical components, are coupled and integrated in a very efficient way.
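A minimal sketch of the tight-coupling idea described above, assuming a drastically simplified model: the armature current of a separately excited DC motor and a one-degree-of-freedom longitudinal vehicle stand in for the electrical subsystem and the full MBS3D multibody model, and both are integrated by a single stiff MATLAB solver (all parameter values are illustrative).

```matlab
% Tight coupling: one state vector x = [i_a; v] and one (stiff) integrator.
% The 1-DOF mechanical equation is a stand-in for the full multibody model.
R = 0.5;  L = 1e-3;  kphi = 0.8;   % armature resistance (ohm), inductance (H),
                                   % motor constant (V*s/rad), all assumed
m = 1200; r = 0.3;   G = 7;        % vehicle mass (kg), wheel radius (m), gear ratio
Fres = 150;                        % lumped rolling/drag resistance (N), assumed
u = @(t) 72;                       % chopper output voltage = pedal command (V)

dxdt = @(t, x) [ (u(t) - R*x(1) - kphi*G*x(2)/r) / L;  ... % electrical ODE
                 (G*kphi*x(1)/r - Fres) / m ];             % mechanical ODE
[t, x] = ode15s(dxdt, [0 20], [0; 0]);  % ode15s suits the stiff electrical part
plot(t, x(:,2)); xlabel('t (s)'); ylabel('vehicle speed (m/s)');
```

Because both subsystems share the integrator, the step size is dictated by the fastest (electrical) dynamics, which is exactly the trade-off the dissertation reports for the tight-coupling approach.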

Relevance:

100.00%

Publisher:

Abstract:

The paper summarizes the results obtained by applying various implementations of the direct boundary element method (BEM) to the solution of the Laplace equation governing the potential flow problem during everyday service manoeuvres of high-speed trains. In particular, the results of train-passing events at three different speed combinations are presented. Some recommendations are given to reduce calculation times which, as demonstrated, can be kept within reasonable limits even on today's office PCs. The method is thus shown to be a very valuable tool for the design engineer.
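For reference, the direct BEM mentioned above discretizes the classical boundary integral form of the Laplace equation for the velocity potential; in the standard notation (not taken verbatim from the paper),

```latex
c(\mathbf{x})\,\phi(\mathbf{x}) =
\int_{S}\!\left[ G(\mathbf{x},\mathbf{y})\,\frac{\partial\phi}{\partial n}(\mathbf{y})
 - \phi(\mathbf{y})\,\frac{\partial G}{\partial n}(\mathbf{x},\mathbf{y})\right]\mathrm{d}S(\mathbf{y}),
\qquad
G(\mathbf{x},\mathbf{y}) = \frac{1}{4\pi\,\lVert\mathbf{x}-\mathbf{y}\rVert},
```

where c(x) = 1/2 at a smooth boundary point and 1 in the interior; collocating this identity on surface panels yields the dense linear systems whose assembly and solution times the paper's implementations compare.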

Relevance:

100.00%

Publisher:

Abstract:

García et al. present a class of column generation (CG) algorithms for nonlinear programs. Its main theoretical motivation is that, under some circumstances, finite convergence can be achieved, in much the same way as for the classic simplicial decomposition method; the main practical motivation is that the class contains certain nonlinear column generation problems that can accelerate the convergence of a solution approach that generates a sequence of feasible points. This algorithm can, for example, accelerate simplicial decomposition schemes by making the subproblems nonlinear. This paper complements the theoretical study of the asymptotic and finite convergence of these methods given in [1] with an experimental study focused on their computational efficiency. Three types of numerical experiments are conducted. The first group of test problems is designed to study the parameters involved in these methods. The second group investigates the role and the computation of the prolongation of the generated columns to the relative boundary. The last one carries out a more complete investigation of the difference in computational efficiency between linear and nonlinear column generation approaches. For this investigation two types of test problems are considered: the first is the nonlinear, capacitated single-commodity network flow problem, of which several large-scale instances with varied degrees of nonlinearity and total capacity are constructed and investigated, and the second is a combined traffic assignment model.
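A schematic MATLAB rendering of the column generation loop discussed here, in its classical simplicial decomposition form (linear pricing subproblem); `f`, `gradf`, `x0` and the polyhedron `{A x <= b}` are hypothetical placeholders, and the nonlinear-column variants studied in the paper would replace the `linprog` pricing step.

```matlab
% Simplicial-decomposition-style column generation for min f(x) s.t. A*x <= b.
% f and gradf are hypothetical smooth-objective helpers; x0 is feasible.
X = x0;                                        % column pool, one column per point
opts  = optimoptions('fmincon', 'Display', 'off');
lopts = optimoptions('linprog', 'Display', 'off');
for it = 1:50
    n = size(X, 2);
    % Restricted master problem: minimize f over convex combinations X*w.
    w = fmincon(@(w) f(X*w), ones(n,1)/n, [], [], ones(1,n), 1, ...
                zeros(n,1), [], [], opts);
    x = X*w;
    % Pricing subproblem: minimize the linearization of f over the polyhedron.
    y = linprog(gradf(x), A, b, [], [], [], [], lopts);
    if gradf(x)'*(y - x) >= -1e-8               % no descent column remains
        break                                   % x is optimal
    end
    X = [X, y];                                 % add the generated column
end
```

Making the pricing problem nonlinear, as the paper proposes, changes only the `y = ...` step, while the master problem over the generated columns is unchanged.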

Relevance:

100.00%

Publisher:

Abstract:

In this dissertation a new numerical method for solving fluid-structure interaction (FSI) problems in a Lagrangian framework is developed, in which solids with different constitutive laws can undergo very large deformations and the fluids are considered Newtonian and incompressible. First, a meshless discretization based on local maximum-entropy interpolants is introduced, which allows a spatial domain to be discretized without tessellation, avoiding mesh limitations. The Stokes flow problem is then studied. The Galerkin meshless method based on a max-ent scheme suffers from instabilities for this problem, so stabilization techniques are discussed and analyzed; an unconditionally stable method is finally formulated based on Douglas-Wang stabilization. Next, a Lagrangian formulation of fluid mechanics is derived, establishing a common framework for the fluid and solid domains in which their interaction is naturally accounted for. The resulting equations also require stabilization, which is achieved with a technique analogous to the one used for the Stokes problem. The fully Lagrangian framework for fluid/solid interaction is completed with simple point-to-point and point-to-surface contact algorithms. The method is finally validated, and numerical examples show the potential scope of applications.
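The local maximum-entropy interpolants used for the discretization have the standard form introduced by Arroyo and Ortiz (quoted here for context, with β the locality parameter):

```latex
p_{a}(\mathbf{x}) =
\frac{\exp\!\big(-\beta\,\lVert\mathbf{x}-\mathbf{x}_{a}\rVert^{2}
 + \boldsymbol{\lambda}^{*}(\mathbf{x})\cdot(\mathbf{x}-\mathbf{x}_{a})\big)}
{\sum_{b}\exp\!\big(-\beta\,\lVert\mathbf{x}-\mathbf{x}_{b}\rVert^{2}
 + \boldsymbol{\lambda}^{*}(\mathbf{x})\cdot(\mathbf{x}-\mathbf{x}_{b})\big)},
```

where λ*(x) is chosen at each evaluation point so that the shape functions satisfy the first-order consistency condition Σ_a p_a(x)(x_a − x) = 0; no element connectivity is needed, which is what removes the meshing step.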

Relevance:

100.00%

Publisher:

Abstract:

The railway planning problem is usually studied from two different points of view: macroscopic and microscopic. We propose a macroscopic approach for the high-speed rail scheduling problem in which competitive effects are introduced. We study the train frequency planning, timetable planning and rolling stock assignment problems, and model the problem as a multi-commodity network flow problem that takes competitive transport markets into account. The aim of the presented model is to maximize total operator profit. We solve the optimization model using realistic problem instances obtained from the network of the Spanish railway operator RENFE, including other transport modes in Spain.
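In generic form, the multi-commodity network flow model referred to above reads (notation assumed here, not the paper's):

```latex
\max_{x}\;\sum_{k}\sum_{(i,j)\in A}\big(r^{k}_{ij}-c^{k}_{ij}\big)\,x^{k}_{ij}
\quad\text{s.t.}\quad
\sum_{j:(i,j)\in A} x^{k}_{ij}-\sum_{j:(j,i)\in A} x^{k}_{ji}=b^{k}_{i}\;\;\forall i,k,
\qquad
\sum_{k} x^{k}_{ij}\le u_{ij}\;\;\forall (i,j)\in A,
```

with one commodity k per passenger demand or train service, revenues r and costs c per arc, and capacities u encoding track, station and rolling stock limits; the competitive effects enter through demand terms that depend on the service levels of the other transport modes.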

Relevance:

50.00%

Publisher:

Abstract:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on extensions of intervals have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the statistical moments of the system from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% with respect to the simulation-based reference values. A known drawback of techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals of the system, introduces the noise sources for each group independently, and then combines the results. In this way the number of noise sources is kept under control at all times and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that approach the reduction of execution time from two different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although a given confidence level must be guaranteed for the final results of the optimization process, more relaxed levels, which in turn imply considerably fewer samples per simulation, can be used in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to x240 for small/medium-sized problems. Finally, this thesis introduces HOPLITE, an automated, flexible and modular framework for quantization that implements the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions can be connected to the existing interfaces of the tool in order to expand and improve the capabilities of HOPLITE.
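As context for the two acceleration techniques, here is a minimal MATLAB sketch of the classical greedy word-length descent they speed up; `noise_of` (the Monte-Carlo noise estimate) and `cost_of` (the hardware cost model) are hypothetical placeholders.

```matlab
% Greedy word-length descent: repeatedly remove one fractional bit from the
% signal whose reduction keeps the Monte-Carlo noise estimate within spec at
% the lowest hardware cost. noise_of and cost_of are hypothetical helpers.
nsig = 12;                            % number of signals (illustrative)
wl   = 24*ones(nsig, 1);              % uniform, safely large starting point
spec = 1e-6;                          % allowed output noise power (assumed)
improved = true;
while improved
    improved = false;  best = inf;  pick = 0;
    for s = 1:nsig                    % try trimming each signal by one bit
        trial = wl;  trial(s) = trial(s) - 1;
        if trial(s) >= 0 && noise_of(trial) <= spec && cost_of(trial) < best
            best = cost_of(trial);  pick = s;  improved = true;
        end
    end
    if improved
        wl(pick) = wl(pick) - 1;      % commit the cheapest feasible reduction
    end
end
```

Each `noise_of` call is a full Monte-Carlo simulation, which is why relaxing the confidence level in early iterations (the incremental method) or replacing some calls with an interpolated sensitivity estimate (the interpolative method) pays off so directly.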

Relevance:

40.00%

Publisher:

Abstract:

The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One of the applications that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within their coverage area. Particularly challenging is the problem of estimating the target's position from received signal strength indicator (RSSI) measurements, due to the nonlinear relationship between the measured signal and the true position of the target. Many of the existing approaches suffer either from high computational complexity (e.g., particle filters) or from lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore very suitable for implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the found solution; under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a distributed version of the Gauss-Newton method based on consensus. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting. While some approaches use a special kind of node (called a sink node) for data harvesting and forwarding to the outside world, there are scenarios where such an approach is impractical or even impossible to deploy; further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate in order to form a virtual antenna array that can exploit the benefits of multi-antenna communications; in collaborative beamforming, nodes synchronize their phases in order to add constructively at the receiver. One of the inconveniences associated with collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity, which may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we find distributed algorithms that converge to the optimal centralized beamformer. While in the first part we consider only battery depletion due to communications beamforming, we then extend the model to account for more realistic scenarios by introducing an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from the energy-efficiency perspective, the network's lifetime is significantly improved.
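A centralized sketch of the Gauss-Newton refinement step mentioned above, under an assumed log-distance RSSI model; `anchors` (N x 2 node positions), `rssi` (N x 1 measurements) and `x_init` (the coarse estimate from the convex step) are hypothetical inputs, and the consensus version would distribute the two matrix sums across nodes.

```matlab
% Gauss-Newton refinement for RSSI localization (centralized illustration).
% Model: rssi(i) = P0 - 10*gamma*log10(||x - a_i||) + noise.
P0 = -40;  gamma = 3;                       % path-loss parameters (assumed)
x = x_init;                                 % 2x1 coarse position estimate
for it = 1:20
    d = sqrt(sum((anchors - x.').^2, 2));   % distances to the N anchors
    r = rssi - (P0 - 10*gamma*log10(d));    % measurement residuals
    J = (10*gamma/log(10)) * (anchors - x.') ./ d.^2;  % model Jacobian, N x 2
    dx = (J.'*J) \ (J.'*r);                 % normal-equations step
    x = x + dx;
    if norm(dx) < 1e-6, break, end          % converged
end
```

In the distributed variant, J.'*J and J.'*r are sums of per-node terms, so each iteration reduces to an average-consensus round, which is what makes the refinement implementable in the network itself.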

Relevance:

40.00%

Publisher:

Abstract:

The cutoff frequencies of an EMI filter are normally given by the noise attenuation requirements the filter has to fulfill. In order to select the component values of the filter elements, i.e. inductances and capacitances, an additional design criterion is needed. In this paper the effects of the EMI filter's input and output impedances are considered: the input impedance influences the filter's effect on the system displacement power factor, and the output impedance plays a key role in system stability. The effect of filter element values, the number of filter stages and additional damping networks are considered, and a design procedure is provided. For this analysis a two-port description of the input filters employing ABCD parameters is used.
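To illustrate the two-port description mentioned in the paper, here is a short MATLAB sketch that cascades the ABCD matrices of one damped LC input-filter stage and evaluates the input impedance seen by the source; all element values and the 10 Ω converter load are illustrative assumptions.

```matlab
% ABCD (chain) parameters: series impedance Z -> [1 Z; 0 1], shunt
% admittance Y -> [1 0; Y 1]; stages cascade by matrix multiplication.
f  = logspace(3, 7, 400);                 % 1 kHz .. 10 MHz sweep
s  = 1j*2*pi*f;
L1 = 100e-6;  C1 = 10e-6;                 % filter elements (illustrative)
Rd = 2;       Cd = 47e-6;                 % parallel R-C damping network
Zload = 10;                               % converter input impedance (assumed)
Zin = zeros(size(s));
for k = 1:numel(s)
    T = [1, s(k)*L1; 0, 1] ...            % series inductor
      * [1, 0; s(k)*C1, 1] ...            % shunt capacitor
      * [1, 0; 1/(Rd + 1/(s(k)*Cd)), 1];  % shunt R-C damping leg
    Zin(k) = (T(1,1)*Zload + T(1,2)) / (T(2,1)*Zload + T(2,2));
end
semilogx(f, 20*log10(abs(Zin))); xlabel('f (Hz)'); ylabel('|Z_{in}| (dB\Omega)');
```

Adding a second stage is just one more matrix product, which is what makes the ABCD formulation convenient for comparing stage counts and damping networks.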

Relevance:

40.00%

Publisher:

Abstract:

Computational fluid dynamics (CFD) methods are used in this paper to predict the power production of entire wind farms in complex terrain and to shed some light on the wake flow patterns. Two full three-dimensional Navier-Stokes solvers for incompressible fluid flow, employing k-ε and k-ω turbulence closures, are used. The wind turbines are modeled as momentum absorbers by means of their thrust coefficient, using the actuator disk approach. Alternative methods for estimating the reference wind speed in the calculation of the thrust are tested. The work presented in this paper is part of the work being undertaken within the UpWind Integrated Project, which aims to develop the design tools for the next generation of large wind turbines. In this part of UpWind, the performance of wind farm and wake models is examined in a complex terrain environment where there are few pre-existing relevant measurements. The focus of the work is to evaluate the performance of CFD models in large wind farm applications in complex terrain and to examine the development of the wakes in such an environment.
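In the actuator disk approach referred to above, each rotor enters the momentum equations as a sink with total thrust (standard form; the choice of U_ref is precisely what the alternative estimation methods vary):

```latex
T = \tfrac{1}{2}\,\rho\,A_{d}\,C_{T}\!\left(U_{\mathrm{ref}}\right)\,U_{\mathrm{ref}}^{2},
```

where ρ is the air density, A_d the rotor disk area, C_T the turbine's thrust coefficient curve and U_ref the reference wind speed (free-stream or disk-averaged); T/A_d is distributed as a momentum source over the cells occupied by the disk.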

Relevance:

40.00%

Publisher:

Abstract:

Europe needs to restructure its energy system. The European Commission has set the goal of decreasing reliance on fossil fuels in favour of a higher share of renewable energy. In order to achieve this goal there is great interest in Norway becoming "The Green Battery of Europe". In pursuit of this goal a GIS tool was created to investigate the pumped storage potential in Norway. The tool searches for possible connections between existing reservoirs and dams according to criteria selected by the user. The aim of this thesis was to test the tool and assess whether the suggested results are plausible, to develop a cost calculation method for the pumped storage hydropower (PSH) lines, and to make suggestions for further development of the tool. During the process the tool presented many non-feasible PSH connections. The area of Telemark was chosen for the more detailed study; the results were discussed and improvements were suggested for further development of the tool. A sensitivity test was also carried out to determine which of the user-set parameters are the most relevant for the PSH connection suggestions. From the range of the most promising PSH plants suggested by the tool, the connection between Songavatn and Totak, where a power plant already exists between the two reservoirs, was chosen for a case study, and a new pumped storage plant with a power production of 1200 MW was designed. Many topics remain open to discussion, such as how to deal with environmental restrictions or with the inflows and outflows of the reservoirs caused by the existing power plants. The GIS tool can therefore be very useful for establishing the best possible connections between existing reservoirs and dams, but it still needs deeper study and the creation of new user parameters.
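A hedged note on the sizing arithmetic behind such a tool: the energy storable by a connection between two reservoirs is, to first order,

```latex
E \;\approx\; \frac{\rho\, g\, V\, \Delta h\;\eta}{3.6\times 10^{6}}\;\;\text{kWh},
```

with V the usable volume (m³), Δh the gross head between the reservoirs (m), η the generation efficiency and ρg ≈ 9810 N/m³ for water; head, distance and volume are exactly the kinds of user criteria by which the tool screens candidate connections.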

Relevance:

40.00%

Publisher:

Abstract:

Leakage power consumption is a component of the total power consumption of data centers that is not traditionally considered in the set-point temperature of the room. However, the effect of this power component, which increases with temperature, can determine both the savings associated with careful management of the cooling system and the reliability of the system. The work presented in this paper identifies the need to address leakage power in order to achieve substantial savings in the energy consumption of servers. In particular, our work shows that, by carefully detecting and managing two working regions (low and high impact of thermally dependent leakage), the energy consumption of the data center can be optimized through a reduction of the cooling budget.
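The temperature dependence that creates those two working regions is commonly approximated with an exponential leakage model (stated here as an assumption, not as the paper's exact expression):

```latex
P_{\mathrm{leak}}(T)\;\approx\;P_{\mathrm{leak}}(T_{0})\,e^{\alpha\,(T-T_{0})},
\qquad
P_{\mathrm{server}} = P_{\mathrm{dyn}} + P_{\mathrm{leak}}(T),
```

with α an empirically fitted technology constant; raising the room set-point cuts cooling power but inflates P_leak, so the optimum set-point sits where the two marginal effects balance.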

Relevance:

40.00%

Publisher:

Abstract:

Electricity price forecasting is an interesting problem for all the agents involved in electricity market operation. For instance, every profit maximisation strategy is based on the computation of accurate one-day-ahead forecasts, which is why electricity price forecasting has been a growing field of research in recent years. In addition, the increasing concern about environmental issues has led to a high penetration of renewable energies, particularly wind. In some European countries such as Spain, Germany and Denmark, renewable energy is having a deep impact on the local power markets. In this paper we propose a model that is optimal from the perspective of forecasting accuracy: a combination of several univariate and multivariate time series methods that account for the amount of energy produced with clean energies, particularly wind and hydro, which are the most relevant renewable energy sources in the Iberian Market. This market is used to illustrate the proposed methodology, as it is one of the markets in which wind power production is most relevant in terms of its percentage of total demand, but the method can of course be applied to any other liberalised power market. As far as our contribution is concerned, first, the methodology proposed by García-Martos et al. (2007, 2012) is generalised in two ways: we allow the incorporation of wind power production and hydro reservoirs, and we do not impose the restriction of using the same model for all 24 hours. A computational experiment and a Design of Experiments (DOE) are performed for this purpose. Then, for those hours in which there are two or more models without statistically significant differences in forecasting accuracy, a combination of forecasts is proposed by weighting the best models (according to the DOE) and minimising the Mean Absolute Percentage Error (MAPE), the most popular accuracy metric for comparing electricity price forecasting models. We construct the combination of forecasts by solving several nonlinear optimisation problems that yield the optimal weights. The results are obtained from a large computational experiment that entails calculating out-of-sample forecasts for every hour of every day from January 2007 to December 2009. In addition, to reinforce the value of our methodology, we compare our results with those in recently published works in the field; this comparison shows the superiority of our methodology in terms of forecasting accuracy.
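For reference, the accuracy metric and the forecast-combination problem described above can be written as (notation assumed):

```latex
\mathrm{MAPE}=\frac{100}{N}\sum_{t=1}^{N}\frac{\lvert\hat{p}_{t}-p_{t}\rvert}{p_{t}},
\qquad
\min_{\mathbf{w}}\;\mathrm{MAPE}\!\Big(\sum_{m} w_{m}\,\hat{p}^{(m)}_{t},\;p_{t}\Big)
\;\;\text{s.t.}\;\;\sum_{m} w_{m}=1,\;\;w_{m}\ge 0,
```

where the p̂(m) are the hourly forecasts of the models found statistically indistinguishable in the DOE; the weight optimization is nonlinear because the absolute percentage errors are not additive in w.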

Relevance:

40.00%

Publisher:

Abstract:

The requirement to deliver high data rates in modern wireless communication systems results in complex modulated RF signals with wide bandwidth and high peak-to-average power ratio (PAPR). In order to guarantee linearity, conventional linear power amplifiers typically work at 4 to 10 dB back-off from the maximum output power, leading to low system efficiency. Envelope elimination and restoration (EER) and envelope tracking (ET) are two promising techniques to overcome the efficiency problem. In both EER and ET, it is challenging to design an efficient envelope amplifier for wide-bandwidth, high-PAPR RF signals. A usual approach for the envelope amplifier includes a high-efficiency switching power converter operating at a frequency higher than the RF signal's bandwidth. In this case, the power loss of the converter caused by the high switching frequency becomes unbearable for system efficiency when the signal bandwidth is very wide. The solution of this problem is the focus of this dissertation, which presents two envelope amplifier architectures: a hybrid series converter with a slow-envelope technique, and a multilevel converter based on a multiphase buck converter with minimum time control. In the first architecture, a hybrid topology is composed of a switched buck converter and a linear regulator in series that work together to adjust the output voltage to track the envelope with accuracy. A slow-envelope generation algorithm yields a waveform whose slew rate is limited below the maximum slew rate of the original envelope. The buck converter's output follows this waveform instead of the original envelope using a lower switching frequency, because the waveform has both reduced slew rate and reduced bandwidth; the linear regulator used to filter the waveform incurs additional power loss. Depending on how much the slew rate is reduced to obtain the waveform, there is a trade-off between the power loss of the buck converter, related to the switching frequency, and the power loss of the linear regulator. The optimal point, corresponding to the lowest total power loss of the envelope amplifier, is identified with the help of a precise power loss model that combines behavioral and analytical loss models. In addition, the effect of the output filter on the response is analyzed: an extra parallel damping filter is needed to eliminate the resonant oscillation of the output filter L and C, because the buck converter operates in open loop. The second architecture is a multilevel voltage-tracking envelope amplifier. Unlike converters using multiple sources, a multiphase buck converter is employed to generate the multilevel voltage. In the steady state, the buck converter operates at the duty cycles with complete ripple cancellation, and the number of voltage levels equals the number of phases, according to the characteristics of the interleaved buck converter. For the transitions, a minimum time control (MTC) for multiphase converters is newly proposed and developed to change the output voltage of the buck converter between different levels. As opposed to conventional minimum time controls for multiphase converters with equivalent inductance, the proposed MTC considers the current ripple of each phase based on the fixed phase shift, resulting in different control schemes among the phases. The advantage of this control is that all the phase currents return to the steady state after the transition, so that the next transition can be triggered very soon, which is very favorable for the multilevel voltage tracking application. Moreover, the control is independent of the load condition and is not affected by unbalanced phase currents. As in the first architecture, a linear stage with the same function is connected in series with the multiphase buck converter. Since neither the steady state nor the transition state of the converter is strongly related to the switching frequency, the switching frequency can be reduced for wide-bandwidth envelopes, which is the main consideration of this architecture. The optimization of the second architecture for wider-bandwidth envelopes is presented, including the output filter design, the switching frequency and the number of phases. The filter design area is constrained by the fast transition and by the minimum pulse of the hardware: fast transitions call for a small filter, while the minimum pulse limitation pushes the design in the opposite direction. The converter switching frequency mainly affects the minimum pulse limitation and the power loss; with a lower switching frequency, the pulse width in the transition is smaller. The number of phases can be optimized in terms of overall efficiency for the specific application. Another aspect of the optimization is the improvement of the control strategy. Transition shifting allows tracking parts of the envelope that are faster than the hardware can support, at the price of complexity, and a new transition synchronization method increases the frequency of the transitions, allowing the multilevel voltage to follow the envelope more closely. Both strategies push the converter to track envelopes with bandwidths beyond the limitation of the power stage. The power loss model of the envelope amplifier is detailed and validated by measurements. The power loss mechanism of the buck converter has to include the transitions in real-time operation, which differs from the classical power loss model of a synchronous buck converter; this model estimates the system efficiency and plays a very important role in the optimization process. Finally, the second envelope amplifier architecture is integrated with a Class F amplifier. Measurements of the EER system prove the power saving achieved with the proposed envelope amplifier without degrading the linearity of the system.
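A minimal MATLAB sketch of the slow-envelope idea in the first architecture: build the smallest waveform that stays at or above the envelope while respecting a chosen slew-rate limit, so the buck stage tracks the slow waveform and the series linear regulator drops only the difference (sample rate, slew limit and the input vector `env` are illustrative assumptions, not the dissertation's algorithm verbatim).

```matlab
% Slow-envelope generation: slew-limited upper envelope of env.
fs   = 10e6;                      % envelope sample rate, Hz (assumed)
kmax = 5e5;                       % allowed slew rate, V/s (design parameter)
dmax = kmax/fs;                   % maximum change per sample, V
slow = env;                       % env: column vector of envelope samples, V
for n = 2:numel(env)              % forward pass: enforce the slope limit
    slow(n) = max(slow(n), slow(n-1) - dmax);
end
for n = numel(env)-1:-1:1         % backward pass: enforce it in reverse
    slow(n) = max(slow(n), slow(n+1) - dmax);
end
% Now slow >= env everywhere and |diff(slow)| <= dmax, so the buck converter
% can follow 'slow' at a reduced switching frequency while the linear stage
% dissipates the headroom (slow - env).
```

The trade-off discussed in the text is visible here: a smaller `kmax` lets the buck converter switch more slowly, but raises the average headroom the linear regulator must burn.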

Relevance:

40.00%

Publisher:

Abstract:

The efficiency of a power plant is affected by the distribution of the pulverized coal within the furnace. The coal, which is pulverized in the mills, is transported and distributed by the primary gas through the mill-ducts to the interior of the furnace. This serves a double purpose: to dry the coal and to feed it into the furnace at different levels, optimizing combustion in the sense that complete combustion occurs with homogeneous heat fluxes to the walls. The mill-duct systems of a real power plant are very complex and not yet well understood; in particular, experimental data on the mass flows of coal to the different levels are very difficult to measure. CFD modeling can help determine them. An Eulerian/Lagrangian approach is used due to the low solid-gas volume ratio.
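In the Eulerian/Lagrangian approach mentioned, the gas phase is solved on the Eulerian grid while each coal-particle parcel is tracked by integrating Newton's second law with the drag exerted by the primary gas (standard formulation, quoted here for reference):

```latex
m_{p}\,\frac{\mathrm{d}\mathbf{v}_{p}}{\mathrm{d}t}
= \frac{\pi}{8}\,\rho_{g}\,d_{p}^{2}\,C_{D}\,
\lVert\mathbf{u}_{g}-\mathbf{v}_{p}\rVert\,\big(\mathbf{u}_{g}-\mathbf{v}_{p}\big)
+ m_{p}\,\mathbf{g},
```

where u_g is the local gas velocity, d_p the particle diameter and C_D a Reynolds-number-dependent drag coefficient; the low solid-gas volume ratio is what justifies tracking the particles without their volume displacing the gas.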