11 results for RES-based facilities

at Universidad Politécnica de Madrid


Relevance:

90.00%

Publisher:

Abstract:

Besides space laboratories for in-orbit experimentation, Earth-based laboratory facilities are of paramount importance for advancing liquid bridge knowledge. In spite of the constraints imposed by simulated microgravity (which force one to work either with very small liquid bridges or with the Plateau tank technique, among other approaches), the availability and accessibility of Earth facilities can in many cases circumvent the drawbacks associated with simulated microgravity conditions. To support theoretical and in-orbit experimental studies on liquid bridges under reduced gravity, several ground facilities were developed at IDR. In the following, these ground facilities are briefly described and the main results obtained with them are cited.
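
The size constraint in ground experiments can be made explicit through the static Bond number, which compares gravitational and capillary forces; the expression below is a standard result added here for context, not a formula taken from the cited work.

```latex
% Static Bond number for a liquid bridge of radius R, density difference
% \Delta\rho with the surrounding medium, and surface tension \sigma:
\[
  \mathrm{Bo} = \frac{\Delta\rho\, g\, R^{2}}{\sigma}
\]
% Earth-based experiments keep Bo small either by reducing R (millimetric
% bridges) or by reducing \Delta\rho (Plateau tank: density-matched outer bath).
```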

Relevance:

40.00%

Publisher:

Abstract:

This thesis presents a task-oriented approach to telemanipulation for maintenance in large scientific facilities, with specific focus on the particle accelerator facilities at the European Organization for Nuclear Research (CERN) in Geneva, Switzerland, and the GSI Helmholtz Centre for Heavy Ion Research (GSI) in Darmstadt, Germany. It examines how telemanipulation can be used in these facilities and reviews how this differs from the representation of telemanipulation tasks in the literature. It provides methods to assess and compare telemanipulation procedures, as well as a test suite to compare telemanipulators themselves from a dexterity perspective. It presents a formalisation of telemanipulation procedures into a hierarchical model which can then be used as a basis to aid maintenance engineers in assessing tasks for telemanipulation, and as a basis for future research. The model introduces the new concept of Elemental Actions as the building blocks of telemanipulation movements and incorporates the dependent factors of procedures at a higher level of abstraction. In order to gain insight into realistic tasks performed by telemanipulation systems in both industrial and research environments, a survey of teleoperation experts is presented. Analysis of the responses concludes that the robotics community needs physical benchmarking tests geared towards evaluating and comparing the dexterity of telemanipulators. A three-stage test suite is presented which is designed to allow maintenance engineers to assess different telemanipulators for their dexterity. It incorporates general characteristics of the system, a method to compare the kinematic reachability of multiple telemanipulators, and physical test setups to assess dexterity both qualitatively and measurably, using performance metrics. Finally, experimental results are provided for the application of the proposed test suite to two telemanipulation systems, one from a research setting and the other at CERN. The procedure performed is described, comparisons between the two systems are discussed, and input from the expert operator of the CERN system is provided.
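
As a rough illustration of the hierarchical formalisation described above (a procedure decomposed into tasks, and tasks into Elemental Actions), the following sketch shows one possible data structure; the class names and fields are illustrative assumptions, not the schema defined in the thesis.

```python
# Illustrative sketch (not the thesis's actual schema): a procedure is decomposed
# into tasks, and each task into Elemental Actions, the smallest telemanipulation
# movements considered by the model.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ElementalAction:
    name: str              # e.g. "grasp", "rotate", "insert"
    duration_s: float      # estimated execution time
    difficulty: int        # 1 (easy) .. 5 (hard), operator-assessed

@dataclass
class Task:
    name: str
    actions: List[ElementalAction] = field(default_factory=list)

    def duration(self) -> float:
        return sum(a.duration_s for a in self.actions)

@dataclass
class Procedure:
    name: str
    tasks: List[Task] = field(default_factory=list)

    def duration(self) -> float:
        return sum(t.duration() for t in self.tasks)

# Example: a simplified connector exchange
swap = Procedure("connector exchange", [
    Task("remove", [ElementalAction("unscrew", 30.0, 3), ElementalAction("extract", 10.0, 2)]),
    Task("install", [ElementalAction("insert", 12.0, 3), ElementalAction("screw", 35.0, 3)]),
])
print(f"{swap.name}: ~{swap.duration():.0f} s")
```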

Relevance:

30.00%

Publisher:

Abstract:

The European HiPER project aims to demonstrate the commercial viability of inertial fusion energy within the following two decades. This goal requires an extensive research and development program on materials for different applications (e.g., first wall, structural components and final optics). In this paper we discuss our activities within the framework of HiPER to develop materials studies for the different areas of interest. The chamber first wall will have to withstand explosions of at least 100 MJ at a repetition rate of 5-10 Hz. If direct-drive targets are used, a dry-wall chamber operated in vacuum is preferable; in this situation the major threat to the wall stems from ions. For reasonably low chamber radii (5-10 m), new materials based on W and C are being investigated, e.g., engineered surfaces and nanostructured materials. Structural materials will be subject to high neutron fluxes leading to deleterious effects such as swelling. Low-activation advanced steels as well as new nanostructured materials are being investigated. The final optics lenses will not survive the extreme ion irradiation pulses originating in the explosions; therefore, mitigation strategies are being investigated. In addition, efforts are under way to understand the optimum conditions to minimize the loss of optical properties under neutron and gamma irradiation.
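
To put the first-wall threat in perspective, a back-of-envelope estimate of the areal energy load follows from spreading one explosion isotropically over the chamber surface, using the figures quoted above (100 MJ yield, 5 m radius); this is an order-of-magnitude illustration, not a result from the paper.

```latex
% Back-of-envelope areal energy load on the first wall, assuming an isotropic
% 100 MJ release and a 5 m chamber radius (values quoted above):
\[
  \frac{E}{4\pi R^{2}} = \frac{100\ \mathrm{MJ}}{4\pi\,(5\ \mathrm{m})^{2}}
  \approx 0.3\ \mathrm{MJ\,m^{-2}}
  \quad \text{per shot, repeated 5--10 times per second.}
\]
```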

Relevance:

30.00%

Publisher:

Abstract:

The Internet of Things (IoT), as part of the Future Internet, has become one of the main research topics nowadays, thanks in part to the attention society is paying to the development of a particular kind of services (smart metering, smart grids, eHealth, etc.) and to recent business forecasts that place some players, such as telecom operators (which are desperately seeking new opportunities), at the forefront pushing interrelated technologies like Machine-to-Machine (M2M) communications. In this context, a large number of research activities are taking place worldwide at different levels: sensor network communications, information processing, big-data storage, semantics, service-level architectures, etc. All of them, in isolation, are reaching a level of maturity that makes the Internet of Things look more like a tangible goal than a dream. However, the aforementioned services cannot wait for holistic research actions to deliver complete solutions; it is important to produce intermediate results that avoid vertical solutions tailored to particular deployments. In the present work, we focus on the creation of a service-level platform intended to facilitate, on one side, the integration of heterogeneous and geographically dispersed Sensor and Actuator Networks (SANs) and, on the other, the development of horizontal services using them and the information they provide. This enabler will be used for horizontal service development and for IoT experimentation. Prior to the definition of the platform, we carried out an extensive study covering not only research works and projects but also standardization activities. The results can be summarized in the following assertions: a) the Open Geospatial Consortium (OGC®) Sensor Web Enablement (SWE™) data models represent today the most complete solution to describe SANs and observations; b) OGC interfaces, despite limitations that require changes and extensions, can be used as the basis for accessing sensors and data; c) Next Generation Networks (NGN) offer a good substrate that facilitates the integration of SANs and the development of services. Consequently, a new service-layer platform, called Ubiquitous Sensor Networks (USN), has been defined in this Thesis to help fill the previous gaps. The main highlights of the proposed USN Platform are: a) from an architectural point of view, it follows a two-layer approach (Enabler and Gateway) similar to other enablers that run on top of NGN (such as OMA Presence); b) data models and interfaces are based on the OGC SWE standards; c) it is integrated in NGN but can also be used without it over open IP infrastructures; d) its main functions are sensor discovery, observation storage, publish-subscribe-notify, homogeneous remote execution, security, data dictionary handling, monitoring facilities, authorization support, protocol conversion utilities, synchronous and asynchronous interactions, streaming support, and basic resource arbitration. To demonstrate the functionality that the proposed USN Platform can offer to future IoT scenarios, experimental results are presented for three real-life, small-scale proofs of concept (smart metering, smart places, and environmental monitoring) and a study on semantics (an in-vehicle information system). Furthermore, the proposed USN Platform is currently being used as an enabler to develop both experimentation and real services in the SmartSantander EU project (which aims to integrate around 20,000 IoT devices).
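
As a minimal sketch of the publish-subscribe-notify function listed above, operating on SWE-style observations, the following code illustrates the idea; the class names (Observation, USNEnabler) and fields are assumptions for illustration and do not reproduce the actual USN platform API.

```python
# Minimal illustrative sketch of a publish-subscribe-notify enabler over
# SWE-style observations. Names are illustrative only, not the USN platform API.
from dataclasses import dataclass
from typing import Callable, Dict, List
from datetime import datetime, timezone

@dataclass
class Observation:                 # loosely mirrors an OGC O&M observation
    procedure: str                 # sensor identifier
    observed_property: str         # e.g. "temperature"
    result: float
    timestamp: datetime

class USNEnabler:
    """Toy enabler: sensors publish, horizontal services subscribe by property."""
    def __init__(self) -> None:
        self._subs: Dict[str, List[Callable[[Observation], None]]] = {}

    def subscribe(self, observed_property: str,
                  callback: Callable[[Observation], None]) -> None:
        self._subs.setdefault(observed_property, []).append(callback)

    def publish(self, obs: Observation) -> None:
        for cb in self._subs.get(obs.observed_property, []):
            cb(obs)                # notify every subscribed service

enabler = USNEnabler()
enabler.subscribe("temperature", lambda o: print(f"{o.procedure}: {o.result} degC"))
enabler.publish(Observation("sensor-42", "temperature", 21.5,
                            datetime.now(timezone.utc)))
```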

Relevance:

30.00%

Publisher:

Abstract:

Many industries use highly technological solutions to improve quality in all of their products; the steel industry is one example. Several automatic surface-inspection systems are used in the steel industry to identify various types of defects and to help operators decide whether to accept, reroute, or downgrade the material subject to the assessment process. This paper promotes a strategy that considers all defects in an integrated fashion, managing the uncertainty about the exact position of a defect due to different process conditions by means of Gaussian additive influence functions. The relevance of the approach is that it makes consistency and reliability between surface-inspection systems possible. The results obtained are an increase in confidence in the automatic inspection system and the ability to introduce improved prediction and advanced routing models. The prediction is provided to technical operators to help them in their decision-making process. The gain is shown by a reduction in the 40% of coils that are downgraded at the hot strip mill because of specific defects. In addition, this technology yields a 50% increase in the accuracy of the estimate of defect survival after the cleaning facility, in comparison to the former approach. The proposed technology is implemented by means of software-based, multi-agent solutions, which make possible the independent treatment of information, presentation, quality analysis, and other relevant functions.
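
The following sketch illustrates the idea of Gaussian additive influence functions: each reported defect contributes a Gaussian centred on its nominal position, with its spread encoding positional uncertainty, and the contributions are summed into an influence map. The coil length and sigma values are illustrative assumptions, not the paper's calibrated parameters.

```python
# Illustrative sketch: each reported defect contributes a Gaussian "influence"
# centred on its nominal position along the coil; summing them yields a map in
# which positional uncertainty from different process conditions is absorbed.
import numpy as np

def influence_map(positions_m, sigmas_m, coil_length_m=1000.0, step_m=0.5):
    x = np.arange(0.0, coil_length_m, step_m)
    total = np.zeros_like(x)
    for mu, sigma in zip(positions_m, sigmas_m):
        total += np.exp(-0.5 * ((x - mu) / sigma) ** 2)   # unnormalised Gaussian
    return x, total

# Two defects reported near 120 m and 125 m with different position uncertainties:
x, infl = influence_map([120.0, 125.0], [2.0, 5.0])
print(f"peak influence {infl.max():.2f} at {x[infl.argmax()]:.1f} m")
```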

Relevance:

30.00%

Publisher:

Abstract:

It is difficult to define a profession that emerges from the need to adapt workspaces to new organizational trends, to productivity, and to the new technologies that over the last decades have kept changing and easing the way we work. It is even harder to define an almost invisible profession: when everything in a building, a property, an asset works, everything seems fine, and therein lies the difficulty of its definition. What is not seen is not valued. Meetings, visits, a workstation, a meeting room, a rest area. HVAC, fire protection, legionella, power supply, an evacuation. The organization, its needs, its philosophy. Reports, analyses, improvements. People, space, processes, technology. Nowadays everything is associated with its cost, with its profitability. In the difficult task of designing a building, a multitude of aspects take part and must be perfectly organized. The architect designs and brings together in the project the past (experience), the present (trends) and the future (durability). It is when the future of the building, its durability, is considered that its life cycle becomes a fundamental design criterion, one that must be taken into account from the very first draft of the project. A great number of factors condition whether a building endures over time, starting with its appropriate use and level of activity, passing through the different characteristics it may have, and ending with those responsible for its day-to-day maintenance. That invisible profession is the discipline known as Facility Management. Another discipline, not so new (its beginnings date back to the end of the 19th century) and increasingly valued today, is Social Responsibility: everything an organization does voluntarily, beyond what is strictly legal, in order to contribute to sustainable development (economic, social and environmental). Both disciplines stand out for their continuous dynamism, reflecting the evolution of different concerns: people, processes, spaces and technology on one side; economic, social and environmental matters on the other. They can only be handled through proper change management, the hinge between the two disciplines. This research work is based on a study of the degree of awareness of Social Responsibility within the Facility Management sector in Spain. To that end, several exercises (five in total) were structured to analyze communication, the current regulatory framework, and the opinion of the professionals, the facility managers. The objective is to understand the current influence of Social Responsibility on the practice of the Facility Management profession. Special emphasis is placed on the voluntary nature of both disciplines; hence this research focuses on voluntary elements and on the added value obtained when both disciplines are managed jointly and voluntarily. So that an organization can carry out its core business, the facility manager manages the organization's second-largest cost, which can become the largest if personnel costs (salaries, benefits, etc.) are included. Between 70% and 80% of a building's cost over its whole useful life is incurred during its operation phase, its durability. Technology facilitates management, but it is people, at the strategic, tactical and operational levels, who manage and sustain this durability. In these times of constant competition, where innovation is the battle uniform, the facility manager's added value is built by managing the real estate portfolio with responsible criteria. The differentiating factor: the brand, the reputation.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an ontology-based, multi-technology platform designed to overcome some of the limitations of Building Automation Systems. The platform allows the integration of several building automation protocols, eases the development and implementation of different kinds of services, and allows information related to the infrastructure and facilities within a building to be shared. The system has been implemented and tested in the Energy Efficiency Research Facility at CeDInt-UPM.
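
A minimal sketch of the kind of protocol-independent device model such a platform can expose is given below; the classes and capability names are illustrative assumptions and do not correspond to the actual ontology used at CeDInt-UPM.

```python
# Illustrative sketch (not the platform's actual ontology): devices reached over
# different building-automation protocols are exposed through one shared model,
# so services query capabilities instead of protocol-specific addresses.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Device:
    uri: str                   # model individual, e.g. "building1/floor2/light3"
    protocol: str              # "KNX", "BACnet", "ZigBee", ...
    capabilities: List[str]    # e.g. ["Switchable", "Dimmable"]

class BuildingModel:
    def __init__(self) -> None:
        self._devices: Dict[str, Device] = {}

    def register(self, device: Device) -> None:
        self._devices[device.uri] = device

    def find_by_capability(self, capability: str) -> List[Device]:
        return [d for d in self._devices.values() if capability in d.capabilities]

model = BuildingModel()
model.register(Device("floor2/light3", "KNX", ["Switchable", "Dimmable"]))
model.register(Device("floor2/hvac1", "BACnet", ["TemperatureSetpoint"]))
print([d.uri for d in model.find_by_capability("Switchable")])
```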

Relevance:

30.00%

Publisher:

Abstract:

Understanding the radio signal transmission characteristics of the environment in which the telerobotic application will operate is key to achieving a reliable wireless communication link between a telerobot and a control station. In this paper, wireless communication requirements and a case study of a typical telerobotic application in an underground facility at CERN are presented. The theoretical and experimental characteristics of radio propagation are then investigated with respect to time, distance, location and surrounding objects. Based on the analysis of the experimental findings, we show how a commercial wireless system, such as Wi-Fi, can be made suitable for the case-study application at CERN.
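
One common way to relate received signal strength to distance in such measurement campaigns is the log-distance path-loss model with log-normal shadowing, reproduced below for context; the exponent and shadowing term are site-dependent and must be fitted to the measured data, and this is not claimed to be the model fitted in the paper.

```latex
% Log-distance path-loss model with log-normal shadowing. PL(d_0) is the path
% loss at a reference distance d_0, n the (site-dependent) path-loss exponent,
% and X_sigma a zero-mean Gaussian term capturing shadowing by surrounding objects:
\[
  PL(d) = PL(d_{0}) + 10\, n \,\log_{10}\!\left(\frac{d}{d_{0}}\right) + X_{\sigma},
  \qquad d \ge d_{0}
\]
```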

Relevance:

30.00%

Publisher:

Abstract:

This Thesis addresses the efficiency problems of electrical grids from the consumption point of view. In particular, efficiency is improved by smoothing the aggregated consumption curve. This objective entails two major improvements in the use of electrical grids: i) in the short term, a better use of the existing infrastructure, and ii) in the long term, a reduction of the infrastructure required to supply the same energy needs. In addition, this Thesis faces a new energy paradigm, in which distributed generation, in particular photovoltaic (PV) generation, is widespread over the electrical grids. This kind of energy source affects the operation of the grid by increasing its variability, which means that a high penetration rate of photovoltaic electricity is detrimental to grid stability. This Thesis seeks to smooth the aggregated consumption while taking this energy source into account. Therefore, not only is the efficiency of the electrical grid improved, but the penetration of photovoltaic electricity into the grid can also be increased. This proposal brings great benefits in the economic, social and environmental fields. The actions that influence the way consumers use electricity in order to achieve energy savings or a higher efficiency in energy use are called Demand-Side Management (DSM). This Thesis proposes two different DSM algorithms to meet the aggregated-consumption-smoothing objective. The difference between the two algorithms lies in the framework in which they operate: the local framework and the grid framework. Depending on the framework, the energy goal and the procedure to reach it are different. In the local framework, the DSM algorithm uses only local information; it does not take into account other consumers or the aggregated consumption of the electrical grid. Although this may differ from the general definition of DSM, it makes sense in local facilities equipped with Distributed Energy Resources (DERs). In this case, DSM focuses on maximizing the use of local energy, reducing dependence on the grid. The proposed DSM algorithm significantly improves the self-consumption of the local PV generator. Simulated and real experiments show that self-consumption is an important energy management strategy, reducing electricity transport and encouraging users to control their energy behavior. However, despite all its advantages, increasing self-consumption does not contribute to smoothing the aggregated consumption. The effects of local facilities on the electrical grid are studied when the DSM algorithm focuses on self-consumption maximization. This approach may have undesirable effects, increasing the variability of the aggregated consumption instead of reducing it, because the algorithm considers only local variables. The results suggest that coordination between facilities is required: through this coordination, consumption should be modified taking into account other elements of the grid and seeking to smooth the aggregated consumption. In the grid framework, the DSM algorithm takes into account both local and grid information. This Thesis develops a self-organized algorithm to manage the consumption of an electrical grid in a distributed way. The goal of this algorithm is to smooth the aggregated consumption, as in classical DSM implementations. The distributed approach means that DSM is performed from the consumer side, without following direct commands issued by a central entity. Therefore, this Thesis proposes a parallel management structure rather than a hierarchical one, as in classical electrical grids. This implies that a coordination mechanism between facilities is required, and this Thesis seeks to minimize the amount of information necessary for that coordination. To achieve this objective, two collective coordination techniques have been used: coupled oscillators and swarm intelligence. The combination of these techniques to coordinate a system with the characteristics of the electrical grid is itself a novel approach; hence this coordination objective is a contribution not only to the energy management field but also to the field of collective systems. Results show that the proposed DSM algorithm reduces the difference between the maxima and minima of the electrical grid in proportion to the amount of energy controlled by the algorithm: the greater the amount of controlled energy, the greater the efficiency improvement in the electrical grid. In addition to the advantages resulting from the smoothing of the aggregated consumption, other advantages arise from the distributed approach followed in this Thesis. They are summarized in the following features of the proposed DSM algorithm:
• Robustness: in a centralized system, a failure of the central node causes a malfunction of the whole system. Managing the grid from a distributed point of view means there is no central control node, so a failure in any facility does not affect the overall operation of the grid.
• Data privacy: the distributed topology means there is no central node holding sensitive information about all consumers. This Thesis goes a step further: the proposed DSM algorithm does not use specific information about consumer behavior, and the coordination between facilities is completely anonymous.
• Scalability: the proposed DSM algorithm operates with any number of facilities, so new facilities can be incorporated without affecting its operation.
• Low cost: the proposed DSM algorithm adapts to current grids without topological requirements, and every facility computes its own management with low computational requirements, so no central node with high computing power is needed.
• Fast deployment: the scalability and low cost of the proposed DSM algorithm allow a quick roll-out; no complex deployment planning is required.
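
As a rough illustration of how coupled oscillators can coordinate consumption anonymously, the sketch below uses repulsively coupled phase oscillators: each facility adjusts a scheduling phase using only an aggregated mean field, so deferrable loads end up staggered in time. The coupling scheme and parameters are illustrative assumptions, not the algorithm defined in the Thesis.

```python
# Illustrative sketch, not the Thesis's algorithm: facilities coordinate the
# timing of deferrable loads through repulsively coupled phase oscillators (a
# desynchronisation scheme). Phases spread out over the cycle, so loads fire at
# staggered times and the aggregated consumption flattens.
import numpy as np

rng = np.random.default_rng(0)
n, steps, dt = 20, 2000, 0.01
k = -0.5                                    # negative coupling -> phases repel
theta = rng.uniform(0.0, 2.0 * np.pi, n)    # each facility's scheduling phase

for _ in range(steps):
    # each facility only needs the aggregated mean field, not per-consumer data
    mean_field = np.mean(np.exp(1j * theta))
    theta += dt * k * np.abs(mean_field) * np.sin(np.angle(mean_field) - theta)

# an order parameter near 0 means phases (load firing times) are evenly spread
print(f"order parameter: {np.abs(np.mean(np.exp(1j * theta))):.3f}")
```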

Relevance:

30.00%

Publisher:

Abstract:

Game Theory principles make it possible to develop stochastic multi-robot patrolling models to protect critical infrastructures. Critical infrastructure protection is a great concern for countries around the world, mainly due to the terrorist attacks of the last decade. In this document, the term infrastructure includes airports, nuclear power plants, and many other facilities. The patrolling problem is defined as the activity of traversing a given environment to monitor activity or sense some environmental variables. If this activity is performed by a fleet of robots, they have to visit a set of places of interest in an environment at irregular intervals of time for security purposes. This problem is solved using multi-robot patrolling models. To date, works in the literature have solved this problem by applying various mathematical principles, and the multi-robot patrolling models developed in those works represent great advances in this field. However, the models that obtain the best results are unfeasible for security applications due to their centralized and predictable nature. This thesis presents five distributed and unpredictable multi-robot patrolling models based on mathematical learning models derived from Game Theory. These models aim at overcoming the disadvantages of previous work. To this end, the multi-robot patrolling problem was formulated using concepts of Graph Theory to represent the environment, and several normal-form games were defined at each vertex of a graph. The multi-robot patrolling models developed in this research work have been validated and compared with the best-ranked multi-robot patrolling models in the literature. Both validation and comparison were performed using a patrolling simulator and real robots. Experimental results show that the multi-robot patrolling models developed in this research work improve on previous ones in 80% of the 150 case studies. Moreover, these models offer several features that are valuable in security applications, such as distribution, robustness, scalability, and dynamism. The achievements of this research work demonstrate the potential of Game Theory to develop patrolling models to protect infrastructures.
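
The following sketch illustrates, in a very reduced form, how stochastic vertex-by-vertex decisions keep a patrol unpredictable while still favouring long-unvisited places; the graph, the idleness-based softmax rule and the temperature parameter are illustrative assumptions, not the game-theoretic learning models developed in the thesis.

```python
# Illustrative sketch, not the thesis's learning rule: at the current vertex a
# robot chooses the next vertex stochastically, favouring neighbours with high
# idleness (time since last visit). Randomisation keeps the patrol unpredictable.
import random
import math

graph = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
idleness = {v: 0.0 for v in graph}          # time since each vertex was visited

def next_vertex(current: str, temperature: float = 5.0) -> str:
    neighbours = graph[current]
    weights = [math.exp(idleness[v] / temperature) for v in neighbours]  # softmax
    return random.choices(neighbours, weights=weights, k=1)[0]

position = "A"
for t in range(20):
    for v in idleness:
        idleness[v] += 1.0                  # every vertex ages one time step
    position = next_vertex(position)
    idleness[position] = 0.0                # visiting a vertex resets its idleness
print("final idleness:", {v: round(i, 1) for v, i in idleness.items()})
```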

Relevance:

30.00%

Publisher:

Abstract:

As advanced Cloud services become mainstream, the contribution of data centers to the overall power consumption of modern cities is growing dramatically. The average consumption of a single data center is equivalent to the energy consumption of 25,000 households. Modeling the power consumption of these infrastructures is crucial to anticipate the effects of aggressive optimization policies, but accurate and fast power modeling is a complex challenge for high-end servers that is not yet satisfied by analytical approaches. This work proposes an automatic method, based on Multi-Objective Particle Swarm Optimization, for the identification of power models of enterprise servers in Cloud data centers. Our approach, as opposed to previous procedures, not only considers workload consolidation when deriving the power model, but also incorporates other non-traditional factors such as the static power consumption and its dependence on temperature. Our experimental results show that we obtain slightly better models than classical approaches while simultaneously simplifying the power model structure, and thus the number of sensors needed, which is very promising for short-term energy prediction. This work, validated with real Cloud applications, broadens the possibilities of deriving efficient energy-saving techniques for Cloud facilities.
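
As a minimal sketch of the kind of model structure discussed above (a static term plus utilisation- and temperature-dependent terms), the snippet below fits such a model to synthetic data with ordinary least squares; the coefficients and data are invented for illustration, and the paper itself identifies its models with Multi-Objective Particle Swarm Optimization rather than least squares.

```python
# Illustrative sketch of a server power model with a static term plus
# utilisation and temperature dependence, fitted here with ordinary least
# squares on synthetic data for brevity (the paper uses MOPSO instead).
import numpy as np

rng = np.random.default_rng(1)
n = 200
util = rng.uniform(0.0, 1.0, n)             # CPU utilisation in [0, 1]
temp = rng.uniform(30.0, 70.0, n)           # CPU temperature in degC
# synthetic ground truth: P = 80 + 120*u + 0.6*T, plus measurement noise
power = 80.0 + 120.0 * util + 0.6 * temp + rng.normal(0.0, 2.0, n)

X = np.column_stack([np.ones(n), util, temp])
coef, *_ = np.linalg.lstsq(X, power, rcond=None)
p_static, p_util, p_temp = coef
print(f"P ~ {p_static:.1f} + {p_util:.1f}*u_cpu + {p_temp:.2f}*T  [W]")
```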