11 results for Dynamic task allocation

at Universidad Politécnica de Madrid


Relevance:

100.00%

Publisher:

Abstract:

In this paper, we propose the distributed bees algorithm (DBA) for task allocation in a swarm of robots. In the proposed scenario, task allocation consists of assigning the robots to the targets found in a 2-D arena. The expected distribution is obtained from the targets' qualities, which are represented as scalar values. The decision-making mechanism is distributed, and robots autonomously choose their assignments taking into account the targets' qualities and distances. We tested the scalability of the proposed DBA algorithm in terms of the number of robots and the number of targets. For that, the experiments were performed in the simulator for various sets of parameters, including the number of robots, the number of targets, and the targets' utilities. Control parameters inherent to DBA were tuned to test how they affect the final robot distribution. The simulation results show that, as the robot swarm size increased, the distribution error decreased.
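
The abstract does not give the exact selection rule, but a DBA-style probabilistic choice can be illustrated with a minimal Python sketch in which each robot picks a target with a probability that grows with the target's quality and shrinks with its distance; the exponents alpha and beta stand in for the tunable control parameters mentioned above and are purely illustrative.

import random

def choose_target(targets, robot_pos, alpha=1.0, beta=1.0):
    """Pick a target index with probability proportional to
    quality^alpha / distance^beta (hypothetical DBA-style rule)."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 or 1e-9

    weights = [(q ** alpha) / (dist(robot_pos, pos) ** beta)
               for pos, q in targets]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(range(len(targets)), weights=probs, k=1)[0]

# Example: two targets with qualities 0.7 and 0.3 in a 2-D arena.
targets = [((1.0, 2.0), 0.7), ((4.0, 1.0), 0.3)]
print(choose_target(targets, robot_pos=(0.0, 0.0)))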

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we study a robot swarm that has to perform task allocation in an environment that features periodic properties. In this environment, tasks appear in different areas following periodic temporal patterns. The swarm has to reallocate its workforce periodically, performing a temporal task allocation that must be synchronized with the environment to be effective. We tackle temporal task allocation using methods and concepts that we borrow from the signal processing literature. In particular, we propose a distributed temporal task allocation algorithm that synchronizes robots of the swarm with the environment and with each other. In this algorithm, robots use only local information and a simple visual communication protocol based on light blinking. Our results show that a robot swarm that uses the proposed temporal task allocation algorithm performs considerably more tasks than a swarm that uses a greedy algorithm.
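
The abstract only says that methods and concepts are borrowed from the signal processing literature; as one hedged illustration of that idea, the dominant period of observed task appearances can be estimated with a discrete Fourier transform. The actual algorithm in the paper, including the light-blinking communication protocol, is not reproduced here.

import numpy as np

def dominant_period(task_signal, dt=1.0):
    """Estimate the dominant period (in time units) of a binary signal
    marking when tasks were observed, via the discrete Fourier transform."""
    x = np.asarray(task_signal, dtype=float)
    x = x - x.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=dt)
    k = spectrum[1:].argmax() + 1         # skip the zero-frequency bin
    return 1.0 / freqs[k]

# Example: tasks observed every 20 time steps.
signal = [1 if t % 20 == 0 else 0 for t in range(400)]
print(dominant_period(signal))            # ~20.0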

Relevance:

100.00%

Publisher:

Abstract:

Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures, which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. Starting in the mid 80s there has been significant progress in the development of parallelizing compilers for logic programming (and more recently, constraint programming) resulting in quite capable parallelizers. The typical applications of these paradigms frequently involve irregular computations, and make heavy use of dynamic data structures with pointers, since logical variables represent in practice a well-behaved form of pointers. This arguably makes the techniques used in these compilers potentially interesting. In this paper, we introduce in a tutorial way some of the problems faced by parallelizing compilers for logic and constraint programs and provide pointers to some of the significant progress made in the area. In particular, this work has resulted in a series of achievements in the areas of inter-procedural pointer aliasing analysis for independence detection, cost models and cost analysis, cactus-stack memory management, techniques for managing speculative and irregular computations through task granularity control and dynamic task allocation (such as work-stealing schedulers), etc.
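
As a toy illustration of the work-stealing schedulers mentioned above (not the actual scheduler used in these systems), the following Python sketch lets idle workers steal tasks from the opposite end of a busy worker's deque.

import random
from collections import deque

def work_stealing_run(task_lists):
    """Toy sequential simulation of work stealing: each worker owns a deque,
    takes work from its own end, and steals from the opposite end of a
    randomly chosen victim when its own deque is empty."""
    deques = [deque(tasks) for tasks in task_lists]
    done = [[] for _ in deques]
    while any(deques):
        for wid, dq in enumerate(deques):
            if dq:
                done[wid].append(dq.pop())                # run own task (LIFO end)
            else:
                victims = [d for d in deques if d]
                if victims:
                    done[wid].append(random.choice(victims).popleft())  # steal (FIFO end)
    return done

# Example: worker 0 starts with all the work; workers 1-3 steal from it.
print([len(d) for d in work_stealing_run([list(range(12)), [], [], []])])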

Relevance:

100.00%

Publisher:

Abstract:

Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. In the past decade there has been significant progress in the development of parallelizing compilers for logic programming and, more recently, constraint programming. The typical applications of these paradigms frequently involve irregular computations, which arguably makes the techniques used in these compilers potentially interesting. In this paper we introduce in a tutorial way some of the problems faced by parallelizing compilers for logic and constraint programs. These include the need for inter-procedural pointer aliasing analysis for independence detection and having to manage speculative and irregular computations through task granularity control and dynamic task allocation. We also provide pointers to some of the progress made in these areas. In the associated talk we demonstrate representatives of several generations of these parallelizing compilers.

Relevance:

90.00%

Publisher:

Abstract:

Energy consumption in data centers is nowadays a critical objective because of its dramatic environmental and economic impact. Over the last years, several approaches have been proposed to tackle the energy/cost optimization problem, but most of them have failed to provide an analytical model targeting both the static and dynamic optimization domains for complex heterogeneous data centers. This paper proposes and solves an optimization problem for the energy-driven configuration of a heterogeneous data center. It also proposes a new mechanism for task allocation and workload distribution. The combination of both approaches outperforms previously published results in the field of energy minimization in heterogeneous data centers and opens a promising area of research.
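
The paper's analytical model is not reproduced in the abstract; as a purely hypothetical illustration of energy-driven task allocation in a heterogeneous data center, a greedy heuristic might place each task on the server whose energy cost increases the least, as in this sketch.

def greedy_energy_allocation(tasks, servers):
    """Hypothetical greedy heuristic: place each task on the server whose
    energy cost increases the least, respecting remaining capacity.
    `servers` maps name -> (capacity, energy_per_unit_load)."""
    load = {name: 0.0 for name in servers}
    placement = {}
    for task, demand in tasks.items():
        candidates = [name for name, (cap, _) in servers.items()
                      if load[name] + demand <= cap]
        if not candidates:
            raise ValueError(f"no server can host task {task}")
        best = min(candidates, key=lambda n: servers[n][1] * demand)
        placement[task] = best
        load[best] += demand
    return placement

# Example: two heterogeneous servers, three tasks.
servers = {"low_power": (10.0, 1.0), "high_perf": (20.0, 2.5)}
tasks = {"t1": 4.0, "t2": 4.0, "t3": 4.0}
print(greedy_energy_allocation(tasks, servers))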

Relevance:

30.00%

Publisher:

Abstract:

This paper focuses on the general problem of coordinating multi-robot systems; more specifically, it addresses the self-election of heterogeneous and specialized tasks by autonomous robots. In this regard, we propose experimenting with two different biologically inspired techniques based chiefly on self-organization and emergence, applying response threshold models as well as ant colony optimization. Under this approach we can speak of multi-task selection instead of multi-task allocation, meaning that the agents or robots select the tasks rather than being assigned a task by a central controller. The key element in these algorithms is the estimation of the stimuli and the adaptive update of the thresholds. This means that each robot performs this estimate locally depending on the load, that is, the number of pending tasks to be performed. We have evaluated the robustness of the algorithms, perturbing the number of pending loads to simulate the robot's error in estimating the real number of pending tasks, as well as the dynamic generation of loads over time. The paper ends with a critical discussion of the experimental results.
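
The abstract does not specify the exact stimulus and threshold equations; the sketch below uses the standard response-threshold rule from the swarm literature, P = s^n / (s^n + theta^n), with an illustrative adaptive update that lowers the threshold of the task a robot performs and raises the others.

import random

def engage_probability(stimulus, threshold, n=2):
    """Classic response-threshold rule: P = s^n / (s^n + theta^n)."""
    return stimulus ** n / (stimulus ** n + threshold ** n)

def step(robot_thresholds, stimuli, learn=0.1, forget=0.05):
    """One decision step: each robot engages in at most one task and adapts
    its thresholds (lower for the task it performs, higher otherwise).
    The exact update rule in the paper may differ."""
    assignments = []
    for thresholds in robot_thresholds:
        chosen = None
        for task, s in stimuli.items():
            if random.random() < engage_probability(s, thresholds[task]):
                chosen = task
                break
        assignments.append(chosen)
        for task in thresholds:
            if task == chosen:
                thresholds[task] = max(0.01, thresholds[task] - learn)
            else:
                thresholds[task] = min(1.0, thresholds[task] + forget)
    return assignments

# Example: three robots, two task types with different pending-load stimuli.
robots = [{"haul": 0.5, "scout": 0.5} for _ in range(3)]
print(step(robots, {"haul": 0.8, "scout": 0.2}))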

Relevance:

30.00%

Publisher:

Abstract:

This paper focuses on the general problem of coordinating multiple robots. More specifically, it addresses the self-selection of heterogeneous, specialized tasks by autonomous robots. We focus on a decentralized approach, in which the robots themselves, autonomously and individually, are responsible for selecting a particular task so that all the existing tasks are optimally distributed and executed. In this regard, we have established an experimental scenario to solve the corresponding multi-task distribution problem, and we propose a solution using two different approaches: Response Threshold Models and Learning Automata-based probabilistic algorithms. We have evaluated the robustness of the algorithms, perturbing the number of pending loads to simulate the robot's error in estimating the real number of pending tasks, as well as the dynamic generation of loads over time. The paper ends with a critical discussion of the experimental results.
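
As with the previous entry, the exact update rules are not given in the abstract; a generic Learning Automata building block, the linear reward-inaction scheme, can be sketched as follows. The learning rate and the reward signal are illustrative, not taken from the paper.

import random

class LinearRewardInaction:
    """Linear reward-inaction (L_R-I) learning automaton: on a rewarded
    action, shift probability mass toward it; on a penalty, do nothing."""
    def __init__(self, n_actions, rate=0.1):
        self.p = [1.0 / n_actions] * n_actions
        self.rate = rate

    def choose(self):
        return random.choices(range(len(self.p)), weights=self.p, k=1)[0]

    def update(self, action, rewarded):
        if not rewarded:
            return
        for i in range(len(self.p)):
            if i == action:
                self.p[i] += self.rate * (1.0 - self.p[i])
            else:
                self.p[i] -= self.rate * self.p[i]

# Example: action 1 is rewarded more often, so its probability grows.
la = LinearRewardInaction(n_actions=2)
for _ in range(200):
    a = la.choose()
    la.update(a, rewarded=(a == 1 and random.random() < 0.8) or
                          (a == 0 and random.random() < 0.2))
print(la.p)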

Relevance:

30.00%

Publisher:

Abstract:

The design of a nuclear power plant has to follow a number of regulations aimed at limiting the risks inherent in this type of installation. The goal is to prevent, and to limit the consequences of, any possible incident that might threaten the public or the environment. To verify that the safety requirements are met, a safety assessment process is followed. Safety analysis is a key component of a safety assessment, and it incorporates both probabilistic and deterministic approaches. The deterministic approach attempts to ensure that the various situations, and in particular the accidents, that are considered plausible have been taken into account, and that the monitoring systems and the engineered safety and safeguard systems will be capable of ensuring the safety goals. Probabilistic safety analysis, on the other hand, tries to demonstrate that the safety requirements are met for potential accidents both within and beyond the design basis, thus identifying vulnerabilities not necessarily accessible through deterministic safety analysis alone. Probabilistic safety assessment (PSA) methodology is widely used in the nuclear industry and is especially effective in the comprehensive assessment of the measures needed to prevent accidents with small probability but severe consequences. Still, the trend towards risk-informed regulation (RIR) demands a more extended use of risk assessment techniques, with a significant need to further extend the scope and quality of PSA. This is where the theory of stimulated dynamics (TSD) intervenes, as it is the mathematical foundation of the integrated safety assessment (ISA) methodology developed by the Modelling and Simulation (MOSI) branch of the CSN (Consejo de Seguridad Nuclear). This methodology attempts to extend classical PSA to include accident dynamic analysis, an assessment of the damage associated with the transients, and a computation of the damage frequency. The application of the ISA methodology requires a computational framework called SCAIS (Simulation Code System for Integrated Safety Assessment). SCAIS supports accident dynamic analysis through the simulation of nuclear accident sequences and operating procedures. Furthermore, it includes probabilistic quantification of fault trees and sequences, and integration and statistical treatment of risk metrics. SCAIS makes intensive use of code coupling techniques to join typical thermal-hydraulic analysis, severe accident, and probability calculation codes. The integration of accident simulation into the risk assessment process, which requires the use of complex nuclear plant models, is what makes the methodology so powerful, yet at the cost of an enormous increase in complexity. As that complexity is concentrated in the accident simulation codes, the question arises of whether it is possible to reduce the number of required simulations; this is the focus of the present work. This document presents the work done on the investigation of more efficient techniques applied to the risk assessment process inside the ISA methodology. The primary goal of such techniques is to decrease the number of simulations needed for an adequate estimation of the damage probability. As the methodology and tools are relatively recent, little work has been done along this line of investigation, making it a difficult but necessary task, and because of time limitations the scope of the work had to be reduced.
Therefore, some assumptions were made in order to work in simplified scenarios best suited for an initial approximation to the problem. The following section explains in detail the process followed to design and test the developed techniques. The next section introduces the general concepts and formulae of the TSD theory, which are at the core of the risk assessment process. Afterwards, a description of the simulation framework requirements and design is given, followed by an introduction to the developed techniques, giving full detail of their mathematical background and procedures. Later, the test case used is described and the results from the application of the techniques are shown. Finally, the conclusions are presented and future lines of work are outlined.
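
The techniques themselves are not described in the abstract; as a baseline against which any simulation-reducing technique would be measured, a crude Monte Carlo estimate of the damage probability looks like the following sketch, where the toy transient model is only a placeholder for the expensive simulation codes.

import random

def estimate_damage_probability(simulate_transient, n_samples=1000):
    """Crude Monte Carlo baseline: run the (expensive) transient simulation
    for sampled sequences and estimate P(damage) as the hit fraction.
    `simulate_transient` is a stand-in for the real simulation code."""
    hits = sum(simulate_transient() for _ in range(n_samples))
    p = hits / n_samples
    stderr = (p * (1 - p) / n_samples) ** 0.5
    return p, stderr

# Hypothetical toy model: damage when the peak temperature exceeds a limit.
def toy_transient(limit=1200.0):
    peak = random.gauss(1100.0, 60.0)   # placeholder for a real simulation
    return peak > limit

print(estimate_damage_probability(toy_transient, n_samples=5000))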

Relevance:

30.00%

Publisher:

Abstract:

The main objective of this paper is the development and application of multivariate time series models for forecasting aggregated wind power production in a country or region. Nowadays, countries such as Spain, Denmark, and Germany show an increasing penetration of this kind of renewable energy, partly to reduce energy dependence on external sources, which is always linked with the increase and uncertainty affecting the prices of fossil fuels. The availability of accurate predictions of wind power generation is crucial both for the System Operator and for all the agents of the Market. However, the vast majority of works rarely consider forecasting horizons longer than 48 hours, although such horizons are of interest for system planning and operation. In this paper we use Dynamic Factor Analysis, adapting and modifying it conveniently, to reach our aim: the computation of accurate forecasts of the aggregated wind power production in a country for a forecasting horizon as long as possible, particularly up to 60 days (2 months). We illustrate this methodology and the results obtained for real data in the leading country in wind power production: Denmark.
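
The paper's adapted Dynamic Factor Analysis is not detailed in the abstract; the following simplified factor-model sketch (principal-component factors with AR(1) dynamics, both assumptions of this illustration) conveys the general idea of forecasting many regional series through a few common factors.

import numpy as np

def factor_forecast(X, n_factors=2, horizon=24):
    """Simplified factor-model forecast: extract principal-component factors
    from the regional wind series, fit an AR(1) to each factor, and map the
    forecasts back. The paper's Dynamic Factor Analysis is more elaborate."""
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    F = Xc @ Vt[:n_factors].T                        # factor scores, (T, n_factors)
    preds = []
    for j in range(n_factors):
        f = F[:, j]
        phi = (f[:-1] @ f[1:]) / (f[:-1] @ f[:-1])   # AR(1) coefficient
        path, last = [], f[-1]
        for _ in range(horizon):
            last = phi * last
            path.append(last)
        preds.append(path)
    Fh = np.array(preds).T                           # (horizon, n_factors)
    return Fh @ Vt[:n_factors] + mean                # back to the original series

# Example with synthetic hourly data for 5 regions.
rng = np.random.default_rng(0)
t = np.arange(500)
X = np.sin(2 * np.pi * t / 24)[:, None] + 0.1 * rng.standard_normal((500, 5))
print(factor_forecast(X, horizon=6).shape)           # (6, 5)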

Relevance:

30.00%

Publisher:

Abstract:

Hybrid wind-diesel power systems have great potential for providing energy supply to remote communities. Compared with traditional diesel systems, hybrid power plants offer many advantages, such as providing extra energy capacity to the micro-grid, reducing pollution and greenhouse-gas emissions, and hedging the risk of unexpected fuel price increases. This dissertation aims at providing novel insights for assessing and optimizing hybrid wind-diesel power systems considering the related uncertainties. Since wind power can neither be controlled nor accurately predicted, the energy harvested from a wind turbine may be considered a stochastic variable. This uncertain nature of the wind energy source results in serious problems for both the operation and the value assessment of the hybrid wind-diesel power system. On the one hand, regulating the uncertain power injected from wind turbines is a difficult task when operating the hybrid system. On the other hand, the economic profit of a hybrid wind-diesel system is achieved directly through the energy delivered to the power grid from the wind energy. Therefore, the uncertainty of wind resources increases the difficulty of estimating the total benefits in the planning stage. The main concern with the traditional deterministic model is that it does not consider future uncertainty when making the dispatch decision, and thus it does not provide flexible operational actions in response to uncertain future scenarios. Performance analysis and computer simulation of the San Cristobal Wind Project demonstrate that the wind power uncertainty, the control strategies, the energy storage, and the wind turbine power curve have a significant impact on the performance of the system. In this dissertation, the relationship between option pricing theory and the decision-making process is discussed. A real option model is developed and presented through practical examples for assessing the value of hybrid wind-diesel power systems. Results show that operational options can provide additional value to the hybrid power system when this operational flexibility is correctly utilized. This framework can be applied to optimizing short-term dispatch decisions considering the path-dependent nature of the optimal dispatch policy, given the plausible future realizations of the wind power production. Compared with existing valuation and optimization methods, the result of the numerical case study shows that the dispatch policy resulting from the proposed optimization model exhibits a remarkable performance in reducing the total fuel consumption of the wind-diesel system. In order to make optimal decisions, power plant operators and managers should not focus only on the direct outcome of each operational action; neither should they make deterministic decisions. The correct way is to dynamically manage the power system by taking into consideration the conditional future value of each option in the face of uncertainty.
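
As a hedged illustration of how conditional future value enters dispatch decisions, the toy backward-induction sketch below values a small battery against uncertain wind; the probabilities, costs, and storage model are invented for the example and are not taken from the dissertation.

from functools import lru_cache

# Toy real-options-style dispatch: backward induction over wind scenarios
# with a small battery, so today's charging decision carries option value.
P_HIGH, FUEL_COST, B_MAX, HORIZON = 0.6, 1.0, 2, 8

@lru_cache(maxsize=None)
def value(t, battery):
    """Minimum expected fuel cost from stage t onward, given battery level."""
    if t == HORIZON:
        return 0.0
    # High-wind branch: choose between storing the surplus or spilling it.
    high = min(value(t + 1, min(battery + 1, B_MAX)),    # charge
               value(t + 1, battery))                     # spill
    # Low-wind branch: discharge if possible, otherwise burn diesel.
    if battery > 0:
        low = min(value(t + 1, battery - 1),              # discharge
                  FUEL_COST + value(t + 1, battery))      # keep reserve, burn fuel
    else:
        low = FUEL_COST + value(t + 1, battery)
    return P_HIGH * high + (1 - P_HIGH) * low

print(round(value(0, 0), 3))   # expected fuel cost of the optimal policy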

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an approach for the detection, localization, and following of dynamic terrestrial objects using a mini-UAV. The development is intended to be used for surveillance of large infrastructures. The detection algorithm is based on finding several pre-defined characteristics of the target, such as color, shape, and size. Once the target is detected, it is localized through an inversion of the pinhole camera model. The task of following the target (a Summit XL ground robot) was designed to keep it inside the field of view of the camera, and it was implemented as a PID controller. The system has been tested both in simulation and with real robots, showing promising results.
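
The controller gains used in the paper are not given in the abstract; a textbook PID loop of the kind described, driven by the target's pixel offset from the image center, can be sketched as follows.

class PID:
    """Textbook PID controller; the gains here are illustrative, not the
    ones tuned in the paper."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: command a yaw rate from the target's horizontal pixel offset so
# the target stays near the image center (error = offset in pixels).
yaw_pid = PID(kp=0.005, ki=0.0005, kd=0.002)
print(yaw_pid.update(error=120.0, dt=0.05))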