63 results for self-deployment algorithms
Abstract:
Remote labs offer many unique advantages to students, as they provide opportunities to access experiments and learning scenarios that would otherwise be unavailable. At the same time, however, these opportunities introduce real challenges to the institutions hosting the remote labs. This paper draws on the experiences of the REXNET project consortium to expose a number of these issues, as a means of furthering the debate on the value of remote labs and the best practices for deploying them. The paper presents a brief outline of the various types of remote lab scenarios that might be deployed. It then describes the key human and technological actors that have an interest in, or are intrinsic to, a remote lab instance, with a description of the role and interest of each actor. Some relationships between these various actors are then discussed, together with some factors that might influence those relationships. Finally, some general issues are briefly described.
Abstract:
Current challenges facing manufacturing systems, arising from the international economic crisis, market globalization and e-business trends, incite the development of intelligent systems to support decision making, allowing managers to concentrate on high-level management tasks while improving decision response and effectiveness, towards manufacturing agility. This paper presents a novel negotiation mechanism for dynamic scheduling based on social and collective intelligence. Under the proposed negotiation mechanism, agents must interact and collaborate in order to improve the global schedule. Swarm Intelligence (SI) is considered a general aggregation term for several computational techniques that use ideas and inspiration from the social behaviors of insects and other biological systems. This work is primarily concerned with negotiation, where multiple self-interested agents can reach agreement over the exchange of operations on competitive resources. Experimental analysis was performed in order to validate the influence of the negotiation mechanism on system performance and on the SI technique. Empirical results and statistical evidence illustrate that the negotiation mechanism significantly influences the overall system performance and the effectiveness of the Artificial Bee Colony technique for makespan minimization and machine occupation maximization.
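As an aside, the Artificial Bee Colony heuristic named above can be sketched for a toy makespan-minimization setting. The sketch below is a hypothetical illustration only: it is not the paper's negotiation mechanism, and all names and parameters (colony size, abandonment limit, iteration count) are assumptions.

```python
import random

def makespan(assign, ptimes, m):
    # assign[j] = machine index of job j; makespan = max machine load
    loads = [0.0] * m
    for job, mach in enumerate(assign):
        loads[mach] += ptimes[job]
    return max(loads)

def abc_schedule(ptimes, m, colony=20, limit=10, iters=200, seed=0):
    rng = random.Random(seed)
    n = len(ptimes)
    # food sources = candidate job-to-machine assignments
    foods = [[rng.randrange(m) for _ in range(n)] for _ in range(colony)]
    trials = [0] * colony
    best = min(foods, key=lambda a: makespan(a, ptimes, m))[:]
    for _ in range(iters):
        # employed bees: local move reassigns one random job
        for i in range(colony):
            cand = foods[i][:]
            cand[rng.randrange(n)] = rng.randrange(m)
            if makespan(cand, ptimes, m) < makespan(foods[i], ptimes, m):
                foods[i], trials[i] = cand, 0
            else:
                trials[i] += 1
        # onlooker bees: revisit sources biased towards low makespan
        fits = [1.0 / (1.0 + makespan(f, ptimes, m)) for f in foods]
        for _ in range(colony):
            i = rng.choices(range(colony), weights=fits)[0]
            cand = foods[i][:]
            cand[rng.randrange(n)] = rng.randrange(m)
            if makespan(cand, ptimes, m) < makespan(foods[i], ptimes, m):
                foods[i], trials[i] = cand, 0
        # scout bees: abandon exhausted sources and explore anew
        for i in range(colony):
            if trials[i] > limit:
                foods[i] = [rng.randrange(m) for _ in range(n)]
                trials[i] = 0
        cur = min(foods, key=lambda a: makespan(a, ptimes, m))
        if makespan(cur, ptimes, m) < makespan(best, ptimes, m):
            best = cur[:]
    return best, makespan(best, ptimes, m)
```

Usage: `abc_schedule([3, 5, 2, 7, 4, 6], m=3)` returns the best assignment found and its makespan.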
Abstract:
Computerized scheduling methods and computerized scheduling systems according to exemplary embodiments. A computerized scheduling method may be stored in a memory and executed on one or more processors. The method may include defining a main multi-machine scheduling problem as a plurality of single-machine scheduling problems; independently solving the plurality of single-machine scheduling problems, thereby calculating a plurality of near-optimal single-machine scheduling problem solutions; integrating the plurality of near-optimal single-machine scheduling problem solutions into a main multi-machine scheduling problem solution; and outputting the main multi-machine scheduling problem solution.
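The abstract does not name a particular single-machine solver; purely as an illustration of the decompose-solve-integrate flow, the sketch below assumes the classical weighted-shortest-processing-time (WSPT) rule, which is optimal for total weighted completion time on one machine. All identifiers are hypothetical.

```python
def solve_single_machine(jobs):
    # jobs: list of (job_id, processing_time, weight).
    # WSPT: sequence by weight/processing-time ratio, descending.
    return sorted(jobs, key=lambda j: j[2] / j[1], reverse=True)

def solve_multi_machine(assignment):
    # assignment: dict machine_id -> jobs placed on that machine.
    # Each single-machine subproblem is solved independently, and the
    # per-machine sequences are integrated into one global schedule.
    return {machine: solve_single_machine(jobs)
            for machine, jobs in assignment.items()}

schedule = solve_multi_machine({
    "M1": [(1, 4.0, 2.0), (2, 2.0, 5.0)],
    "M2": [(3, 3.0, 1.0)],
})
print(schedule)
```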
Abstract:
Power law (PL) distributions have been widely reported in the modeling of distinct real phenomena and have been associated with fractal structures and self-similar systems. In this paper, we analyze real data that follows a PL and a double-PL behavior and verify the relation between the PL coefficient and the capacity dimension of known fractals. A method is proposed that translates PL coefficients into the capacity dimension of fractals for any real data.
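For context, the two quantities the abstract relates are both log-log slopes; the standard definitions (not formulas taken from the paper) are:

```latex
% A power law ties probability to magnitude by a constant log-log slope:
\[
  P(x) \sim C\,x^{-\alpha}
  \quad\Longleftrightarrow\quad
  \ln P(x) \sim \ln C - \alpha \ln x .
\]
% The capacity (box-counting) dimension of a fractal is the log-log
% slope of the number of covering boxes against the box size:
\[
  D_0 = \lim_{\varepsilon \to 0} \frac{\ln N(\varepsilon)}{\ln(1/\varepsilon)} .
\]
```

Both being slopes on log-log axes is what makes a translation between the PL coefficient and the capacity dimension plausible.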
Abstract:
Consider the problem of assigning implicit-deadline sporadic tasks on a heterogeneous multiprocessor platform comprising two different types of processors; such a platform is referred to as a two-type platform. We present two low-degree polynomial time-complexity algorithms, SA and SA-P, each providing the following guarantee. For a given two-type platform and a task set, if there exists a task assignment such that tasks can be scheduled to meet deadlines by allowing them to migrate only between processors of the same type (intra-migrative), then (i) SA is guaranteed to find such an assignment, with the same restriction on task migration, given a platform in which processors are 1+α/2 times faster, and (ii) SA-P succeeds in finding a task assignment in which tasks are not allowed to migrate between processors (non-migrative), given a platform in which processors are 1+α times faster. The parameter 0<α≤1 is a property of the task set; it is the maximum of all the task utilizations that are no greater than 1. We evaluate the average-case performance of both algorithms by generating task sets randomly and measuring how much faster the processors need to be (upper bounded by 1+α/2 for SA and 1+α for SA-P) for the algorithms to output a feasible task assignment (intra-migrative for SA and non-migrative for SA-P). In our evaluations, for the vast majority of task sets, these algorithms require significantly smaller processor speedup than indicated by their theoretical bounds. Finally, we consider a special case where no task utilization in the given task set can exceed one, and for this case we (re-)prove the performance guarantees of SA and SA-P. We show, for both algorithms, that changing the adversary from intra-migrative to a more powerful one, namely fully-migrative, in which tasks can migrate between processors of any type, does not deteriorate the performance guarantees. For this special case, we compare the average-case performance of SA-P and a state-of-the-art algorithm by generating task sets randomly. In our evaluations, SA-P outperforms the state-of-the-art by requiring much smaller processor speedup and by running orders of magnitude faster.
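The bounds in the abstract transcribe directly into code; a minimal sketch (the function name is an assumption) computing α and the two theoretical speedup factors from a task set's utilizations:

```python
def speedup_bounds(utilizations):
    # alpha: the maximum of all task utilizations no greater than 1
    # (a property of the task set, per the abstract).
    alpha = max(u for u in utilizations if u <= 1.0)
    return 1.0 + alpha / 2.0, 1.0 + alpha  # SA bound, SA-P bound

sa_bound, sa_p_bound = speedup_bounds([0.20, 0.75, 0.40])
print(sa_bound, sa_p_bound)  # 1.375 1.75
```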
Abstract:
Consider the problem of assigning implicit-deadline sporadic tasks on a heterogeneous multiprocessor platform comprising a constant number (denoted by t) of distinct types of processors; such a platform is referred to as a t-type platform. We present two algorithms, LPGIM and LPGNM, each providing the following guarantee. For a given t-type platform and a task set, if there exists a task assignment such that tasks can be scheduled to meet their deadlines by allowing them to migrate only between processors of the same type (intra-migrative), then: (i) LPGIM succeeds in finding such an assignment, with the same restriction on task migration (intra-migrative), given a platform in which only one processor of each type is 1 + α·(t−1)/t times faster, and (ii) LPGNM succeeds in finding a task assignment in which tasks are not allowed to migrate between processors (non-migrative), given a platform in which every processor is 1 + α times faster. The parameter α is a property of the task set; it is the maximum of all the task utilizations that are no greater than one. To the best of our knowledge, for t-type heterogeneous multiprocessors: (i) for the problem of intra-migrative task assignment, no previous algorithm exists with a proven bound, and hence our algorithm, LPGIM, is the first of its kind, and (ii) for the problem of non-migrative task assignment, our algorithm, LPGNM, has superior performance compared to the state-of-the-art.
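Stated compactly, the speedup factors claimed in the abstract are:

```latex
\[
  s_{\mathrm{LPGIM}} = 1 + \alpha \cdot \frac{t-1}{t}
  \;\;\text{(one processor of each type sped up)},
  \qquad
  s_{\mathrm{LPGNM}} = 1 + \alpha
  \;\;\text{(every processor sped up)},
\]
\[
  \text{where } \alpha = \max\{\, u_i : u_i \le 1 \,\}.
\]
```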
Abstract:
“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, as well as upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations in the existing recursive-calculus-based approaches to computing the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter yet safe estimates compared to the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter that provides a flexible trade-off between the computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP represent two special cases of BPC, with the trade-off parameter set to 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and also provide some case studies to observe the impact of task parameters on the WCTT estimates.
Abstract:
6th Real-Time Scheduling Open Problems Seminar (RTSOPS 2015), Lund, Sweden.
Abstract:
Advances in technology have produced more and more intricate industrial systems, such as nuclear power plants, chemical centers and petroleum platforms. Such complex plants exhibit multiple interactions among smaller units and human operators, giving rise to potentially disastrous failures that can propagate across subsystem boundaries. This paper analyzes industrial accident data series from the perspective of statistical physics and dynamical systems. Global data is collected from the Emergency Events Database (EM-DAT), covering the period from 1903 to 2012. The statistical distributions of the number of fatalities caused by industrial accidents reveal Power Law (PL) behavior. We analyze the evolution of the PL parameters over time and observe a remarkable increase in the PL exponent in recent years. PL behavior allows prediction by extrapolation over a wide range of scales. In a complementary line of thought, we compare the data using appropriate indices and use different visualization techniques to correlate and extract relationships among industrial accident events. This study contributes to a better understanding of the complexity of modern industrial accidents and their ruling principles.
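The abstract does not state which estimator was used for the PL exponent; for illustration, the standard maximum-likelihood estimator for a continuous power-law tail can be sketched as follows (the function name, cut-off, and sample values are assumptions, not data from the paper):

```python
import math

def pl_exponent_mle(data, xmin):
    # Standard MLE for a continuous power law p(x) ~ x^(-alpha), x >= xmin:
    #   alpha_hat = 1 + n / sum(ln(x_i / xmin))
    tail = [x for x in data if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# e.g. fatalities per accident above a chosen cut-off (made-up numbers):
print(pl_exponent_mle([12, 40, 7, 150, 23, 9, 480, 15], xmin=7))
```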
Abstract:
This paper analyses the performance of a Genetic Algorithm using two new concepts, namely a static fitness function including a discontinuity measure and a fractional-order dynamic fitness function, for the synthesis of combinational logic circuits. In both cases, experiments reveal superior results in terms of speed and convergence towards a solution.
Abstract:
The theory of fractional calculus goes back to the beginning of the theory of differential calculus, but its inherent complexity postponed the application of the associated concepts. In the last decade, progress in the areas of chaos and fractals revealed subtle relationships with fractional calculus, leading to increasing interest in the development of this new paradigm. In the area of automatic control, preliminary work has already been carried out, but the proposed algorithms are restricted to the frequency domain. This paper discusses the design of fractional-order discrete-time controllers. The algorithms studied adopt the time domain, which makes them suited for z-transform analysis and discrete-time implementation. The performance of discrete-time fractional-order controllers with linear and non-linear systems is also investigated.
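The usual time-domain discretization behind such controllers is the Grünwald-Letnikov approximation; this is a standard construction assumed here for context, not necessarily the paper's exact scheme:

```latex
\[
  D^{\alpha} x(t)\big|_{t=kh} \;\approx\; \frac{1}{h^{\alpha}}
  \sum_{j=0}^{k} (-1)^{j} \binom{\alpha}{j}\, x\bigl((k-j)h\bigr),
  \qquad
  \binom{\alpha}{j} = \frac{\alpha(\alpha-1)\cdots(\alpha-j+1)}{j!},
\]
```

where h is the sampling period; truncating the sum yields a realizable discrete-time filter, which is what makes the approach amenable to z-transform analysis.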
Abstract:
The vision of the Internet of Things (IoT) includes large and dense deployment of interconnected smart sensing and monitoring devices. This vast deployment necessitates the collection and processing of large volumes of measurement data. However, collecting all the measured data from individual devices on such a scale may be impractical and time-consuming. Moreover, processing these measurements requires complex algorithms to extract useful information. Thus, it becomes imperative to devise distributed information processing mechanisms that identify application-specific features in a timely manner and with low overhead. In this article, we present a feature extraction mechanism for dense networks that takes advantage of dominance-based medium access control (MAC) protocols to (i) efficiently obtain global extrema of the sensed quantities, (ii) extract local extrema, and (iii) detect the boundaries of events, by using simple transforms that nodes apply to their local data. We extend our results to a large dense network with multiple broadcast domains (MBD). We discuss and compare two approaches for addressing the challenges with MBD and show through extensive evaluations that our proposed distributed MBD approach is fast and efficient at retrieving the most valuable measurements, independent of the number of sensor nodes in the network.
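To illustrate the MAC mechanism being exploited, here is a toy simulation of dominance-based (CAN-style) bitwise arbitration retrieving a global maximum; the function and bit width are assumptions, not the paper's implementation:

```python
def dominant_max(values, bits=16):
    # Each node broadcasts its value MSB-first on a wired-OR bus: a
    # dominant '1' wins each bit slot, and nodes whose recessive '0'
    # lost withdraw. The survivor is the global maximum, found in a
    # number of bit slots independent of the node count.
    contenders = list(values)
    result = 0
    for b in range(bits - 1, -1, -1):
        if any((v >> b) & 1 for v in contenders):
            contenders = [v for v in contenders if (v >> b) & 1]
            result |= 1 << b
    return result

assert dominant_max([5, 42, 17]) == 42
```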
Abstract:
This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf', is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm.
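A minimal sketch of the repulsion idea, assuming a toy one-dimensional system and an erf-shaped penalty (the paper's exact merit function, parameters, and test problems are not reproduced here):

```python
import math
import numpy as np
from scipy.optimize import minimize

def F(x):
    # Toy system standing in for the paper's examples: roots of x^2 - 1.
    return x[0] ** 2 - 1.0

def merit(x, roots, rho=10.0, beta=5.0):
    # Penalty-type merit: residual magnitude plus an erf-shaped repulsion
    # bump around each root already found, discouraging re-convergence.
    val = abs(F(x))
    for r in roots:
        val += rho * (1.0 - math.erf(beta * np.linalg.norm(x - r)))
    return val

rng = np.random.default_rng(0)
roots = []
for _ in range(20):  # multistart: random Nelder-Mead launches
    res = minimize(merit, rng.uniform(-3, 3, size=1), args=(roots,),
                   method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-10})
    if abs(F(res.x)) < 1e-6 and \
            all(np.linalg.norm(res.x - r) > 1e-2 for r in roots):
        roots.append(res.x)

print(sorted(float(r[0]) for r in roots))  # approximately [-1.0, 1.0]
```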