33 results for high-average power laser crystal


Relevance: 100.00%

Publisher:

Abstract:

Energy management has always been recognized as a challenge in mobile systems, especially in modern OS-based mobile systems where multitasking is widely supported. Nowadays, it is common for a mobile system user to run multiple applications simultaneously while having a target battery lifetime in mind for a specific application. Traditional OS-level power management (PM) policies make their best effort to save energy under performance constraints, but fail to guarantee a target lifetime, leaving the painful trade-off between the total performance of the applications and the target lifetime to the user. This thesis provides a new way to deal with the problem. It advocates that a strong energy-aware PM scheme should first guarantee a user-specified battery lifetime to a target application by restricting the average power of the less important applications, and in addition maximize the total performance of the applications without harming that lifetime guarantee. To support this, energy, instead of CPU time or transmission bandwidth, should be globally managed by the OS as the first-class resource. As the first stage of a complete PM scheme, this thesis presents energy-based fair queuing scheduling, a novel class of energy-aware scheduling algorithms which, in combination with a mechanism that restricts the battery discharge rate, systematically manage energy as the first-class resource with the objective of guaranteeing a user-specified battery lifetime for a target application in OS-based mobile systems. Energy-based fair queuing carries traditional fair queuing over to the energy management domain. It assigns a power share to each task and manages energy by serving energy to tasks in proportion to their assigned power shares. The proportional energy use establishes a proportional share of the system power among tasks, which guarantees a minimum power for each task and thus avoids energy starvation of any task. Energy-based fair queuing treats all tasks equally and supports periodic time-sensitive tasks by allocating each of them a share of system power adequate to meet the highest energy demand across all periods. However, an overly conservative power share is usually required to guarantee that all time constraints are met. To support various types of time-sensitive tasks in general-purpose operating systems more effectively and flexibly, an extra real-time-friendly mechanism is introduced that combines priority-based scheduling with energy-based fair queuing. Since a method is available to control the maximum time a time-sensitive task can run with priority, power control and time-constraint meeting can be flexibly traded off. A SystemC-based test bench has been designed to assess the algorithms. Simulation results show the success of energy-based fair queuing in achieving proportional energy use, meeting time constraints, and finding a proper trade-off between them.
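The core mechanism is a direct analogue of weighted fair queuing with energy taking the place of service time. Below is a minimal sketch of that idea, assuming hypothetical task names, power shares and energy quanta (none of these figures come from the thesis):

```python
import heapq

# Energy-based fair queuing sketch: tasks receive energy in proportion
# to their assigned power shares, mirroring weighted fair queuing with
# energy replacing service time. Names, shares and quanta are hypothetical.
tasks = {
    "navigation": {"share": 0.6, "quantum_mJ": 5.0},  # target application
    "sync":       {"share": 0.3, "quantum_mJ": 5.0},
    "background": {"share": 0.1, "quantum_mJ": 5.0},
}

# Each task's virtual finish tag advances by energy/share; the scheduler
# always serves the task with the smallest tag, so the energy delivered
# to each task converges to its assigned power share.
heap = [(0.0, name) for name in tasks]
heapq.heapify(heap)
served_mJ = {name: 0.0 for name in tasks}

for _ in range(1000):
    tag, name = heapq.heappop(heap)
    q = tasks[name]["quantum_mJ"]
    served_mJ[name] += q
    heapq.heappush(heap, (tag + q / tasks[name]["share"], name))

total = sum(served_mJ.values())
for name, e in served_mJ.items():
    print(f"{name}: {e / total:.2%} of energy (share {tasks[name]['share']:.0%})")
```

Running the sketch shows the served-energy fractions settling at roughly 60%/30%/10%, i.e. no task is starved and each receives at least its guaranteed power share.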

Relevance: 100.00%

Publisher:

Abstract:

We investigate the sputter growth of very thin aluminum nitride (AlN) films on iridium electrodes for electroacoustic devices operating in the super-high-frequency range. Films of superior crystal quality and low stress, with thicknesses as low as 160 nm, are achieved after a radio-frequency plasma treatment of the iridium electrode followed by a two-step alternating-current reactive magnetron sputtering of an aluminum target, which promotes better conditions for the nucleation of well-textured AlN films in the very first stages of growth. Solidly mounted resonators tuned around 8 GHz exhibit effective electromechanical coupling factors of 5.8% and quality factors Q of up to 900.
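For context, the effective coupling factor quoted for such resonators is commonly estimated from the series and parallel resonance frequencies. A minimal sketch using one common definition, with illustrative frequencies near the reported 8 GHz tuning (not the paper's measured values, and not necessarily the extraction method the authors used):

```python
# Effective electromechanical coupling from the series (f_s) and parallel
# (f_p) resonance frequencies, using one common definition:
#   k_eff^2 = (f_p^2 - f_s^2) / f_p^2
# The frequencies below are illustrative values near 8 GHz, not measured data.
f_s = 8.00e9   # series resonance (Hz), hypothetical
f_p = 8.24e9   # parallel resonance (Hz), hypothetical

k_eff_sq = (f_p**2 - f_s**2) / f_p**2
print(f"k_eff^2 = {k_eff_sq:.1%}")   # ~5.7% for this f_s-f_p separation
```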

Relevance: 100.00%

Publisher:

Abstract:

We study experimentally the dynamic properties of a fully integrated high-power master-oscillator power-amplifier emitting at 1.5 μm under continuous-wave and gain-switching conditions. High peak power (2.7 W) optical pulses of short duration (~110 ps) have been generated by gain switching the master oscillator. We show the existence of working points, at very close driving conditions, with stable or unstable regimes caused by compound cavity effects. The optical and radio-frequency spectra of stable and unstable operating points are analyzed.

Relevance: 100.00%

Publisher:

Abstract:

In this paper, the implementation and testing of a non-commercial GaN HEMT in a simple buck converter for envelope amplifiers in ET and EER transmission techniques is presented. Compared to prototypes based on the commercially available EPC1014 and EPC1015 GaN HEMTs, the experimentally demonstrated power supply provided better thermal management and increased the switching frequency up to 25 MHz. A 64QAM signal with 1 MHz of large-signal bandwidth and 10.5 dB of peak-to-average power ratio was generated using a switching frequency of 20 MHz. The obtained efficiency was 38% including the driving circuit, and the total losses breakdown showed that switching power losses in the HEMT are the dominant ones. In addition, some basic physical modeling has been done in order to provide insight into the correlation between the electrical characteristics of the GaN HEMT and the physical design parameters. This is the first step in the optimization of the HEMT design for this particular application.
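The quoted 10.5 dB is the peak-to-average power ratio (PAPR) of the envelope signal. A minimal sketch of how PAPR is computed for a generic complex baseband signal, using a random 64QAM symbol stream as a stand-in for the paper's actual test signal:

```python
import numpy as np

# Peak-to-average power ratio (PAPR) of a baseband signal:
#   PAPR_dB = 10 * log10(max(|x|^2) / mean(|x|^2))
# A random 64QAM symbol stream stands in for the paper's test signal.
rng = np.random.default_rng(0)
levels = np.array([-7, -5, -3, -1, 1, 3, 5, 7])
symbols = rng.choice(levels, 10000) + 1j * rng.choice(levels, 10000)

power = np.abs(symbols) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
print(f"PAPR = {papr_db:.1f} dB")  # ~3.7 dB for raw symbols; pulse shaping
# and upsampling raise the envelope PAPR toward the ~10 dB range.
```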

Relevance: 100.00%

Publisher:

Abstract:

Eye-safety requirements in important applications such as LIDAR or free-space optical communications make the generation of high-power, short optical pulses at 1.5 μm particularly interesting. Moreover, high repetition rates allow reducing the error and/or the measurement time in applications involving pulsed time-of-flight measurements, such as range finders, 3D scanners or traffic velocity controls. The Master Oscillator Power Amplifier (MOPA) architecture is an interesting source for these applications since large changes in output power can be obtained at GHz rates with a relatively small modulation of the current in the Master Oscillator (MO). We have recently demonstrated short optical pulses (100 ps) with high peak power (2.7 W) by gain switching the MO of a monolithically integrated 1.5 μm MOPA. Although in an integrated MOPA the laser and the amplifier are ideally independent devices, compound cavity effects due to the residual reflectance at the different interfaces are often observed, leading to modal instabilities such as self-pulsations.
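The link between pulse parameters and ranging performance rests on textbook time-of-flight relations: range is c·t/2, the pulse width sets the single-shot blur, and averaging many pulses at a high repetition rate reduces the random error. A minimal worked example, where only the ~100 ps pulse width comes from the text and the rest is illustrative:

```python
# Pulsed time-of-flight basics: range = c * t_round_trip / 2, and a pulse
# of width tau blurs a single-shot measurement by roughly c * tau / 2.
# Averaging N independent pulses shrinks the random error by ~sqrt(N).
c = 3.0e8            # speed of light (m/s)
tau = 100e-12        # pulse width from the gain-switched MO (s)
t_round = 1.0e-6     # example round-trip time (s) -> 150 m target

distance = c * t_round / 2
single_shot_blur = c * tau / 2
print(f"target at {distance:.0f} m, single-shot blur ~{single_shot_blur*100:.1f} cm")

# At an illustrative 1 GHz repetition rate, 1e6 pulses take only 1 ms,
# reducing the averaged random error by a factor of ~1000.
```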

Relevance: 100.00%

Publisher:

Abstract:

Energy storage at low maintenance cost is one of the key challenges for generating electricity from solar energy. This paper presents a theoretical analysis (verified by CFD) of the night-time performance of a recently proposed conceptual system that integrates thermal storage (via phase change materials) and thermophotovoltaics for power generation. These storage-integrated solar thermophotovoltaic (SISTPV) systems are attractive owing to their simple design (no moving parts) and modularity compared to conventional Concentrated Solar Power (CSP) technologies. Importantly, the high-temperature operation of these systems allows the use of silicon (melting point of 1680 K) as the phase change material (PCM). Silicon's very high latent heat of fusion of 1800 kJ/kg and low cost ($1.70/kg) make it an ideal heat storage medium, enabling extremely high storage energy density and low-weight modular systems. In this paper, the night-time operation of a SISTPV system optimised for steady state is analysed. The results indicate that for any given PCM length, a combination of small taper ratio and large inlet hole-to-absorber area ratio is essential to increase the operation time and the average power produced during the night. Overall, the results show a trade-off between running time and the average power produced during the night. Average night-time power densities as high as 30 W/cm² are possible if the system is designed with a small PCM length (10 cm) to operate just a few hours after sunset, but running times longer than 72 h (3 days) are possible for larger lengths (50 cm) at the expense of a lower average power density of about 14 W/cm². In both cases the steady-state system efficiency is predicted to be about 30%. This makes SISTPV systems a versatile solution that can be adapted for operation in a broad range of locations with different climate conditions, and even for off-grid and space applications.
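A back-of-the-envelope calculation shows why silicon's latent heat matters. A minimal sketch estimating stored energy, running time and material cost of a module, where the latent heat, cost and ~30% efficiency come from the abstract and the mass and output power are illustrative:

```python
# Energy stored in molten silicon and the night-time running time it buys.
latent_heat = 1800e3     # J/kg, silicon latent heat of fusion (from the text)
cost_per_kg = 1.70       # $/kg (from the text)
efficiency = 0.30        # reported steady-state system efficiency

mass = 100.0             # kg of silicon PCM, illustrative module size
stored = mass * latent_heat          # J available from solidification alone
electric = stored * efficiency       # J of electricity out

power_out = 500.0        # W average night-time output, illustrative
hours = electric / power_out / 3600
print(f"{stored/3.6e6:.0f} kWh thermal, ~{hours:.0f} h at {power_out:.0f} W "
      f"for ${mass*cost_per_kg:.0f} of silicon")
```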

Relevance: 100.00%

Publisher:

Abstract:

We describe a compact, lightweight impulse radar for radio-echo sounding of subsurface structures, designed specifically for glaciological applications. The radar operates at frequencies between 10 and 75 MHz. Its main advantages are a high signal-to-noise ratio and a correspondingly wide dynamic range of 132 dB, due mainly to its ability to perform real-time stacking (up to 4096 traces) as well as to the high transmitted power (peak voltage 2800 V). The maximum recording time window, 40 µs at 100 MHz sampling frequency, allows radar returns from as deep as 3300 m. It is a versatile radar, suitable for different geophysical measurements (common-offset profiling, common midpoint, transillumination, etc.) and for different profiling set-ups, such as a snowmobile and sledge convoy or carried in a backpack and operated by a single person. Its low power consumption (6.6 W for the transmitter and 7.5 W for the receiver) allows the system to operate under battery power for more than 7 hours with a total weight of less than 9 kg for all equipment, antennas and batteries.
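Two of the headline figures follow from simple relations: stacking N traces gains about 10·log10(N) dB of SNR against incoherent noise, and the recording window converts to depth through the radio-wave speed in ice. A minimal sketch of both, where the ~168 m/µs ice wave speed is a standard assumption not stated in the abstract:

```python
import math

# SNR gain from real-time stacking: averaging N traces improves SNR by
# 10*log10(N) dB against incoherent noise.
n_stacks = 4096
gain_db = 10 * math.log10(n_stacks)
print(f"stacking gain: {gain_db:.0f} dB")           # ~36 dB

# Maximum sounding depth from the recording window, assuming the usual
# radio-wave speed in ice of ~168 m/us (two-way travel).
window_us = 40.0
v_ice = 168.0                                       # m/us, assumed
depth_m = v_ice * window_us / 2
print(f"max depth: {depth_m:.0f} m")                # ~3360 m, matching ~3300 m
```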

Relevance: 100.00%

Publisher:

Abstract:

This work describes the structural and piezoelectric assessment of aluminum nitride (AlN) thin films deposited by pulsed-DC reactive sputtering on insulating substrates. We investigate the effect of different insulating seed layers on the AlN properties (crystallinity, residual stress and piezoelectric activity). The seed layers investigated, silicon nitride (Si3N4), silicon dioxide (SiO2), amorphous tantalum oxide (Ta2O5), and amorphous or nano-crystalline titanium oxide (TiO2), are deposited on glass plates to a thickness below 100 nm. Before AlN film deposition, their surface is pre-treated with a soft ionic cleaning, either with argon or nitrogen ions. Only AlN films grown on TiO2 seed layers exhibit piezoelectric activity significant enough for acoustic device applications. Purely c-axis oriented films are obtained, with a rocking curve FWHM of 6°, stress below 500 MPa, and electromechanical coupling factors of 1.25% measured in SAW devices. The best AlN films are achieved on amorphous TiO2 seed layers deposited at high target power and low sputtering pressure. On the other hand, AlN films deposited on Si3N4, SiO2 and TaOx exhibit a mixed orientation, high stress and very low piezoelectric activity, which rules out their use in acoustic devices.

Relevance: 100.00%

Publisher:

Abstract:

Objectives: To analyze the distribution of energy deposited in a tissue when it is irradiated with a low-power laser, and to study the minimum specifications that manufacturers of low-power laser therapy equipment should provide to estimate the dose. Material and methods: Monte Carlo simulation was performed to determine where the laser energy is absorbed in the skin for two types of laser, and diffusion theory was used to estimate the penetration depth and the mean free path. The variation of this distribution was studied for three skin types (Caucasian, Asian and African American) and for two anatomic locations: palm and volar forearm. The information provided by several manufacturers of low-power laser therapy equipment was analyzed to determine whether the recommended dosimetry needs to be adapted. Results: Infrared (810 nm) laser radiation is mainly absorbed within a skin thickness of 1.9±0.2 mm for Caucasians, from 1.73±0.08 mm (volar forearm) to 1.80±0.11 mm (palm) for Asians, and from 1.25±0.09 mm (volar forearm) to 1.65±0.2 mm (palm) for African Americans. The light mean free path is below 0.69±0.09 mm in all cases. The laser beam characteristics (beam shape, energy distribution over the transverse section, divergence, incidence angle, etc.) are not usually specified by the manufacturers; only the beam area (ranging from 0.08 to 1 cm²) is given in some cases. Conclusions: Depending on the low-power laser therapy equipment, the patient and the anatomic area to be treated, the clinician should adapt the recommended dose for each individual case.
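In diffusion theory, the penetration depth used for such estimates is δ = 1/√(3 μ_a (μ_a + μ_s′)), with μ_a the absorption coefficient and μ_s′ the reduced scattering coefficient. A minimal sketch with assumed coefficients of the right order for skin at 810 nm (the paper's actual per-skin-type optical properties are not reproduced here):

```python
import math

# Diffusion-theory penetration depth:
#   delta = 1 / sqrt(3 * mu_a * (mu_a + mu_s_prime))
# Illustrative optical coefficients for skin near 810 nm (assumed values,
# not the ones used in the paper).
mu_a = 0.04          # absorption coefficient (1/mm)
mu_s_prime = 1.6     # reduced scattering coefficient (1/mm)

delta = 1 / math.sqrt(3 * mu_a * (mu_a + mu_s_prime))
mfp = 1 / (mu_a + mu_s_prime)    # transport mean free path
print(f"penetration depth ~{delta:.2f} mm, transport mean free path ~{mfp:.2f} mm")
```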

Relevance: 100.00%

Publisher:

Abstract:

To perform Quantum Key Distribution (QKD), mastery of the extremely weak signals carried by the quantum channel is required. Transporting these signals without disturbance is customarily done by isolating the quantum channel from any noise source using a dedicated physical channel. However, to really profit from this technology, full integration with conventional network technologies is highly desirable. Trying to use single-photon signals alongside others that carry an average power many orders of magnitude higher, while sharing as much infrastructure with a conventional network as possible, brings obvious problems. The purpose of the present paper is to report our efforts in researching the limits of the integration of QKD in modern optical network scenarios. We have built a full metropolitan area network testbed comprising a backbone and an access network. The emphasis is put on using, as much as possible, the same industrial-grade technology that is actually used in already installed networks, in order to understand the throughput, limits and cost of deploying QKD in a real network.
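The "many orders of magnitude" gap is easy to quantify from the photon energy at 1550 nm. A minimal sketch comparing a classical channel with a weak quantum channel, where the powers and clock rate are illustrative rather than testbed figures:

```python
import math

# Photons per second at 1550 nm for a given average optical power:
#   rate = P / (h * c / wavelength)
h, c = 6.626e-34, 3.0e8
wavelength = 1550e-9
photon_energy = h * c / wavelength          # ~1.28e-19 J

p_classical = 1e-3                          # 0 dBm classical channel, illustrative
p_quantum = 0.1 * photon_energy * 1e9       # 0.1 photons/pulse at 1 GHz, illustrative

for label, p in [("classical", p_classical), ("quantum", p_quantum)]:
    print(f"{label}: {p/photon_energy:.2e} photons/s "
          f"({10*math.log10(p/1e-3):.0f} dBm)")
```

For these numbers the two channels sit roughly 79 dB apart, which is why crosstalk and filtering dominate the integration problem.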

Relevance: 100.00%

Publisher:

Abstract:

In the search for ever more efficient, long-lasting and low-maintenance lighting devices, LEDs have emerged. These small devices are gradually replacing traditional incandescent bulbs and taking an increasingly important role among lighting sources. LEDs were first used in practice as indicators, in calculator displays, household appliances and the like; later, with the development of new materials, they began to be used as lighting devices. In recent years a qualitative leap has taken place with the appearance of POWER LEDs (high-brightness LEDs), which have extended the use of these devices as lighting sources in homes and street lighting, and even as replacements for halogen headlights in some vehicle models. As their luminous power increases, so does their range of applications. This project characterizes these and other light sources by analyzing their spectrum. To this end, the rest of the instrumentation involved in the project is also analyzed: the spectrometer itself, at both the hardware and the software level (the software is modified according to the interests of the project), the optical fiber, the driver used to control the power LED devices, and the LEDs themselves, whose measured characteristics are compared with those provided by the manufacturer.

Relevance: 100.00%

Publisher:

Abstract:

We present a 3-year project, started on November 1, 2010, financed by the European Commission within the FP7 Space Program and aimed at developing an efficient de-orbit system that could be carried on board by future spacecraft launched into LEO. The operational system will deploy a thin uninsulated tape tether to collect electrons as a giant Langmuir probe, using no propellant and no power supply, and generating power on board. The project involves free-fall tests, laboratory hypervelocity-impact and tether-current tests, and the design/manufacturing of subsystems: interface elements, electric control and driving module, electron-ejecting plasma contactor, tether-deployment mechanism/end-mass, and tape samples. Preliminary results to be presented involve: i) devising criteria for sizing the three disparate tape dimensions, which affect mass, resistance, current collection, magnetic self-field, and survivability against debris itself; ii) assessing the dynamical relevance of tether parameters in implementing control laws to limit oscillations in/off the orbital plane, where passive stability may be marginal; iii) deriving a law for bare-tape current from numerical simulations and chamber tests, taking into account the ambient magnetic field, ion ram motion, and adiabatic electron trapping; iv) determining requirements on a year-dormant hollow cathode under long-time/broad-emission-range operation, and trading off against the use of electron thermal emission; v) determining requirements on magnetic components and power semiconductors for a control module that faces high-voltage/high-power operation under mass/volume limitations; vi) assessing strategies to passively deploy a wide conductive tape that needs no retrieval, while avoiding jamming and ending at minimum libration; vii) evaluating the tape structure as regards conductive and dielectric materials, both lengthwise and in its cross-section, in particular to prevent arcing at triple-point junctions.
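Point iii) concerns collection in the orbital-motion-limited (OML) regime, where a thin bare tape at local positive bias ΔV collects roughly dI/ds = (e·n_e·p/π)·√(2eΔV/m_e) electrons per unit length, p being the cross-section perimeter. A minimal sketch with illustrative LEO values; the project's fitted law adds magnetic-field, ram and trapping corrections that are omitted here:

```python
import math

# Orbital-motion-limited (OML) electron collection by a thin bare tape:
#   dI/ds ~ (e * n_e * p / pi) * sqrt(2 * e * dV / m_e)
# with p the tape cross-section perimeter. Plasma density, bias and tape
# dimensions below are illustrative LEO values, not project figures.
e, m_e = 1.602e-19, 9.109e-31
n_e = 1e12                           # ambient electron density (1/m^3), assumed
width, thick = 0.02, 50e-6           # tape width and thickness (m), assumed
p = 2 * (width + thick)              # perimeter of the tape cross-section
dV = 100.0                           # local tape-to-plasma bias (V), assumed

dI_ds = (e * n_e * p / math.pi) * math.sqrt(2 * e * dV / m_e)
print(f"collected current ~{dI_ds*1000:.1f} mA per meter of tape")
```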

Relevance: 100.00%

Publisher:

Abstract:

Hyperspectral imaging allows us to collect information with very high spectral resolution: hundreds of bands spanning from the infrared to the ultraviolet spectrum. These images are having a strong impact in the medical field; in particular, their use in the detection of several types of cancer stands out. In this field, one of the main current problems is the real-time analysis of these images: because of their great data volume, the required computational power is very high. One of the main research lines addressing this processing-time reduction is based on splitting the analysis across several cores working in parallel. Following this research line, this work develops a library for the RVC-CAL language, which is specially designed for multimedia applications and allows parallelization to be expressed in an intuitive way. The library gathers the functions needed to implement two of the four stages of the hyperspectral processing chain: dimensionality reduction and endmember extraction. This work is complemented by the one conducted by Raquel Lazcano in her Diploma Project, where the functions needed to complete the other two stages of the unmixing chain are developed. The document is divided into several parts. The first presents the motivation for this Diploma Project and the objectives to be achieved. After that, a broad study of the current state of the art explains both hyperspectral images and the platforms and tools that will be used to split the processing across cores, as well as the problems that may arise when doing so. Once the theoretical basis has been laid out, we focus on the methodology followed to compose the unmixing chain and generate the library; an important point in this part is the use of C++ libraries specialized in complex matrix operations. After explaining the methodology, the results obtained are presented, first stage by stage and then for the complete processing chain implemented on one or several cores. Finally, a series of conclusions drawn from analyzing the different algorithms in terms of quality of results, processing times and resource consumption is provided, and several possible future lines of work related to these results are proposed.
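The dimensionality-reduction stage of an unmixing chain is typically a PCA-like transform over the spectral bands. A minimal NumPy sketch of that stage on synthetic data; the abstract does not name the exact algorithm implemented in the RVC-CAL library, so this is only indicative:

```python
import numpy as np

# PCA-style dimensionality reduction of a hyperspectral cube: each pixel is
# a vector of hundreds of band values; we keep the few components that carry
# most of the variance. Synthetic data stands in for a real image.
rng = np.random.default_rng(1)
pixels, bands = 10000, 224
cube = rng.normal(size=(pixels, bands)) @ rng.normal(size=(bands, bands)) * 0.01

centered = cube - cube.mean(axis=0)
cov = centered.T @ centered / (pixels - 1)
eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
k = 10
reduced = centered @ eigvecs[:, -k:]             # keep the top-k components

explained = eigvals[-k:].sum() / eigvals.sum()
print(f"{bands} bands -> {k} components, {explained:.1%} variance retained")
```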

Relevance: 100.00%

Publisher:

Abstract:

This Thesis addresses the efficiency problems of electrical grids from the consumption point of view. In particular, efficiency is improved by smoothing the aggregated consumption curve. This consumption-smoothing objective entails two major improvements in the use of electrical grids: i) in the short term, a better use of the existing infrastructure and ii) in the long term, a reduction of the infrastructure required to supply the same energy needs. In addition, this Thesis faces a new energy paradigm, where distributed generation, in particular Photovoltaic (PV) generation, is widespread over the electrical grids. This kind of energy source affects the operation of the grid by increasing its variability, which implies that a high penetration rate of photovoltaic electricity is pernicious for electrical grid stability. This Thesis seeks to smooth the aggregated consumption while considering this energy source. Therefore, not only is the efficiency of the electrical grid improved, but the penetration of photovoltaic electricity into the grid can also be increased. This proposal brings great benefits in the economic, social and environmental fields. The actions that influence the way consumers use electricity in order to achieve energy savings or higher efficiency in energy use are called Demand-Side Management (DSM). This Thesis proposes two different DSM algorithms to meet the aggregated-consumption-smoothing objective. The difference between the two DSM algorithms lies in the framework in which they take place: the local framework and the grid framework. Depending on the DSM framework, the energy goal and the procedure to reach it are different. In the local framework, the DSM algorithm only uses local information. It does not take into account other consumers or the aggregated consumption of the electrical grid. Although this may differ from the general definition of DSM, it makes sense in local facilities equipped with Distributed Energy Resources (DERs). In this case, the DSM is focused on maximizing the use of local energy, reducing grid dependence. The proposed DSM algorithm significantly improves the self-consumption of the local PV generator. Simulated and real experiments show that self-consumption serves as an important energy management strategy, reducing electricity transport and encouraging the user to control his energy behavior. However, despite all the advantages of increased self-consumption, it does not contribute to smoothing the aggregated consumption. The effects of the local facilities on the electrical grid are studied when the DSM algorithm is focused on self-consumption maximization. This approach may have undesirable effects, increasing the variability of the aggregated consumption instead of reducing it. This effect occurs because, in the local framework, the algorithm only considers local variables. The results suggest that coordination between these facilities is required. Through this coordination, consumption should be modified taking into account other elements of the grid and seeking to smooth the aggregated consumption. In the grid framework, the DSM algorithm takes into account both local and grid information. This Thesis develops a self-organized algorithm to manage the consumption of an electrical grid in a distributed way. The goal of this algorithm is to smooth the aggregated consumption, as in classical DSM implementations. The distributed approach means that the DSM is performed from the consumers' side without following direct commands issued by a central entity. Therefore, this Thesis proposes a parallel management structure rather than a hierarchical one as in classical electrical grids. This implies that a coordination mechanism between facilities is required. This Thesis seeks to minimize the amount of information necessary for this coordination. To achieve this objective, two collective coordination techniques have been used: coupled oscillators and swarm intelligence. The combination of these techniques to coordinate a system with the characteristics of the electrical grid is itself a novel approach; this coordination objective is thus a contribution not only to the energy management field, but also to the field of collective systems. Results show that the proposed DSM algorithm reduces the difference between the maximums and minimums of the electrical grid in proportion to the amount of energy controlled by the algorithm. Thus, the greater the amount of energy controlled by the algorithm, the greater the efficiency improvement of the electrical grid. In addition to the advantages resulting from the smoothing of the aggregated consumption, other advantages arise from the distributed approach followed in this Thesis. These advantages are summarized in the following features of the proposed DSM algorithm:
• Robustness: in a centralized system, a failure or breakage of the central node causes a malfunction of the whole system. Managing the grid from a distributed point of view implies that there is no central control node, so a failure in any facility does not affect the overall operation of the grid.
• Data privacy: the use of a distributed topology means that there is no central node holding sensitive information about all consumers. This Thesis goes a step further: the proposed DSM algorithm does not use specific information about consumer behavior, and the coordination between facilities is completely anonymous.
• Scalability: the proposed DSM algorithm operates with any number of facilities, which allows new facilities to be incorporated without affecting its operation.
• Low cost: the proposed DSM algorithm adapts to current grids without topological requirements. In addition, every facility computes its own management with low computational requirements, so a central node with high computational power is not needed.
• Quick deployment: the scalability and low cost of the proposed DSM algorithm allow a quick deployment. No complex schedule for the deployment of this system is required.
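The coupled-oscillator coordination can be pictured with a Kuramoto-type model: each facility carries a phase that decides when it schedules deferrable consumption, and repulsive coupling spreads the phases, and hence the consumption, over the cycle. A minimal sketch under that interpretation; the Thesis's actual update rules and its swarm-intelligence component are not reproduced here:

```python
import math
import random

# Repulsively coupled Kuramoto oscillators: each facility holds a phase and
# consumes during its own phase window; negative coupling pushes the phases
# apart, spreading consumption over the cycle. Parameters are illustrative.
N, K, dt, steps = 20, -1.5, 0.05, 4000
omega = 2 * math.pi / 24.0            # one consumption cycle per "day"
random.seed(0)
theta = [random.uniform(0, 0.5) for _ in range(N)]   # start nearly in phase

for _ in range(steps):
    new = []
    for i in range(N):
        coupling = sum(math.sin(theta[j] - theta[i]) for j in range(N)) / N
        new.append(theta[i] + dt * (omega + K * coupling))
    theta = new

# Order parameter r: 1 = all synchronized (worst aggregate peak), ~0 = spread.
r = abs(sum(complex(math.cos(t), math.sin(t)) for t in theta)) / N
print(f"phase coherence r = {r:.2f} (started near 1.0)")
```

Only phases are exchanged here, which echoes the anonymity and low-information requirements listed above.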

Relevance: 100.00%

Publisher:

Abstract:

In this paper we present tests on a structure designed to be a gymnasium, which has natural frequencies within the range excited by human jumping. In these tests the gym slab was instrumented with acceleration sensors and different people jumped on a force plate installed on the floor. The test results have been compared with predictions based on the two existing load-modelling alternatives (the Sim model and the SCI Guide), and two new methodologies for modelling jumping loads are proposed. The results of the force plate trials were analysed in an attempt to better characterize the profile of the jump force and determine how best to approximate it. In the first proposed methodology the study is carried out in the frequency domain using an average power spectral density of the jumps. In the second, the jump force is decomposed into the summation of one peak with a large period and a number of peaks with smaller periods. Following a model similar to the Sim model, the approximation is still composed of the summation of two quadratic cosine functions.
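Sim-type jumping loads are built from cosine-squared pulses repeated at the jumping period, with the average force over a period constrained to equal body weight. A minimal sketch composing such a pulse train; the parameters are illustrative, and the paper's two-cosine decomposition is represented here by a single pulse per period:

```python
import numpy as np

# Cosine-squared jump pulse train (the building block of Sim-type models):
#   F(t) = K_p * W * cos^2(pi * t / t_p)  for |t| <= t_p / 2, else 0,
# repeated with the jumping period T. Averaged over a period the force must
# equal body weight W, which fixes K_p = 2 * T / t_p. Numbers are illustrative.
W = 700.0          # body weight (N)
T = 0.5            # jumping period (s) -> 2 Hz jumping
t_p = 0.3          # contact duration (s)
K_p = 2 * T / t_p  # impact factor enforcing mean force = W

t = np.arange(0.0, 5.0, 1e-3)
phase = (t % T) - T / 2                       # center each pulse in its period
force = np.where(np.abs(phase) <= t_p / 2,
                 K_p * W * np.cos(np.pi * phase / t_p) ** 2, 0.0)

print(f"peak force {force.max():.0f} N, mean {force.mean():.0f} N (W = {W:.0f} N)")
```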