975 results for Scheduling Systems


Relevance: 30.00%

Abstract:

In this paper we generalize the Continuous Adversarial Queuing Theory (CAQT) model (Blesa et al. in MFCS, Lecture Notes in Computer Science, vol. 3618, pp. 144–155, 2005) by considering the possibility that the router clocks in the network are not synchronized. We name the new model Non Synchronized CAQT (NSCAQT). Clearly, this new extension to the model only affects those scheduling policies that use some form of timing. In a first approach we consider the case in which, although not synchronized, all clocks run at the same speed, maintaining constant differences. In this case we show that all universally stable policies in CAQT that use the injection time and the remaining path to schedule packets remain universally stable. These policies include, for instance, Shortest in System (SIS) and Longest in System (LIS). Then, we study the case in which clock differences can vary over time, but the maximum difference is bounded. In this model we show the universal stability of two families of policies related to SIS and LIS, respectively (the priority of a packet in these policies depends on the arrival time and a function of the path traversed). The bounds we obtain in this case depend on the maximum difference between clocks. This is a necessary requirement, since we also show that LIS is not universally stable in systems without bounded clock difference. We then present a new policy that we call Longest in Queues (LIQ), which gives priority to the packet that has been waiting the longest in edge queues. This policy is universally stable and, if clocks maintain constant differences, the bounds we prove do not depend on them. Finally, we provide simulation results that compare the behavior of some of these policies in a network with stochastic injection of packets.
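
The policies named in this abstract differ only in the priority they assign to queued packets. The following Python sketch illustrates, under simplified assumptions, how SIS, LIS, and LIQ could rank the packets waiting at an edge; the Packet fields and the clock handling are illustrative and are not the paper's formal model.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    injection_time: float    # timestamp stamped by the sender's (possibly skewed) clock
    queue_arrival: float     # local time at which the packet entered the current edge queue
    queue_wait: float = 0.0  # waiting time accumulated in previously visited edge queues

def sis_key(p, now):
    # Shortest in System: the most recently injected packet goes first.
    return now - p.injection_time             # smaller value = higher priority

def lis_key(p, now):
    # Longest in System: the earliest-injected packet goes first.
    return -(now - p.injection_time)

def liq_key(p, now):
    # Longest in Queues: the packet with the most accumulated edge-queue waiting goes
    # first; it only needs local clocks, so constant clock differences cancel out.
    return -(p.queue_wait + (now - p.queue_arrival))

def pick_next(queue, policy_key, now):
    """Return the packet this policy would transmit next from the edge queue."""
    return min(queue, key=lambda p: policy_key(p, now))
```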

Relevance: 30.00%

Abstract:

The interactions among three important issues involved in the implementation of logic programs in parallel (goal scheduling, precedence, and memory management) are discussed. A simplified, parallel memory management model and an efficient, load-balancing goal scheduling strategy are presented. It is shown how, for systems which support "don't know" non-determinism, special care has to be taken during goal scheduling if the space recovery characteristics of sequential systems are to be preserved. A solution based on selecting only "newer" goals for execution is described, and an algorithm is proposed for efficiently maintaining and determining precedence relationships and variable ages across parallel goals. It is argued that the proposed schemes and algorithms make it possible to extend the storage performance of sequential systems to parallel execution without the considerable overhead previously associated with it. The results are applicable to a wide class of parallel and coroutining systems, and they represent an efficient alternative to "all heap" or "spaghetti stack" allocation models.
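
As an illustration of the "newer goals first" idea mentioned above, the Python sketch below shows a hypothetical goal selector that always steals the most recently created eligible goal, so that the stolen work corresponds to the youngest section of the donor's stack and older space can still be recovered in LIFO order. The data structures are invented for the example and do not reflect the paper's actual machinery.

```python
from dataclasses import dataclass, field
from itertools import count

_creation = count()  # global creation order: newer goals get larger sequence numbers

@dataclass
class Goal:
    name: str
    seq: int = field(default_factory=lambda: next(_creation))

@dataclass
class Worker:
    ready: list = field(default_factory=list)  # parallel goals, pushed in creation order

def steal_newest(workers):
    """Pick the newest available goal across all workers.

    Choosing the goal with the highest creation sequence number means the stolen work
    sits on top of its donor's stack, so the donor keeps recovering older space in
    LIFO order when it backtracks.
    """
    donors = [w for w in workers if w.ready]
    if not donors:
        return None
    donor = max(donors, key=lambda w: w.ready[-1].seq)
    return donor.ready.pop()
```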

Relevance: 30.00%

Abstract:

In this paper, we examine the issue of memory management in the parallel execution of logic programs. We concentrate on non-deterministic and-parallel schemes, which we believe present a relatively general set of problems to be solved, including most of those encountered in the memory management of or-parallel systems. We present a distributed stack memory management model which allows flexible scheduling of goals. Previously proposed models (based on the "Marker model") are lacking in that they impose restrictions on the selection of goals to be executed or may consume a large amount of virtual memory. This paper first presents results which imply that the above-mentioned shortcomings can have a significant performance impact. An extension of the Marker Model is then proposed which allows flexible scheduling of goals while keeping (virtual) memory consumption down. Measurements are presented which show the advantage of this solution. Methods for handling forward and backward execution, cut, and roll back are discussed in the context of the proposed scheme. In addition, the paper shows how the same mechanism for flexible scheduling can be applied to allow the efficient handling of the very general form of suspension that can occur in systems which combine several types of and-parallelism and more sophisticated methods of executing logic programs. We believe that the results are applicable to many and- and or-parallel systems.

Relevance: 30.00%

Abstract:

Adaptive embedded systems are required in various applications. This work addresses these needs in the area of adaptive image compression in FPGA devices. A simplified version of an evolution strategy is utilized to optimize the wavelet filters of a Discrete Wavelet Transform algorithm. We propose an adaptive image compression system in FPGA in which an optimized memory architecture, parallel processing, and optimized task scheduling reduce the time of evolution. The proposed solution has been extensively evaluated in terms of compression quality as well as processing time. The proposed architecture reduces the evolution time by 44% compared to our previous reports while keeping the compression quality unchanged with respect to existing implementations. The system is able to find an optimized set of wavelet filters in less than 2 minutes whenever the type of input data changes.
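
The abstract mentions a simplified evolution strategy tuning the wavelet filter coefficients. The Python sketch below shows what such a loop could look like; the fitness function is a toy stand-in (distance to the Daubechies-4 low-pass taps), whereas the real system evaluates compression quality on the FPGA.

```python
import random

# Toy fitness: distance to the Daubechies-4 low-pass taps, used here only as a stand-in.
# In the real system the fitness would be the compression quality measured on the FPGA.
TARGET = [0.4830, 0.8365, 0.2241, -0.1294]

def fitness(coeffs):
    return -sum((c - t) ** 2 for c, t in zip(coeffs, TARGET))

def evolve_filter(base, sigma=0.05, generations=500, seed=1):
    """Minimal (1+1) evolution strategy over wavelet filter coefficients."""
    rng = random.Random(seed)
    parent, parent_fit = list(base), fitness(base)
    for _ in range(generations):
        child = [c + rng.gauss(0.0, sigma) for c in parent]  # mutate every tap
        child_fit = fitness(child)
        if child_fit >= parent_fit:                          # keep the better filter
            parent, parent_fit = child, child_fit
    return parent, parent_fit

best, score = evolve_filter([0.5, 0.5, 0.5, 0.5])
print(best, score)
```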

Relevance: 30.00%

Abstract:

This paper proposes an automatic framework for the seamless integration of hardware accelerators, starting from an OpenMP-based application and an XML file describing the HW/SW partitioning. It extends a fully software architecture by generating and integrating the accelerator cores, along with the proper interfaces and the code for scheduling and synchronization. Experimental results show that it is possible to validate different solutions simply by varying the input code.
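
As a purely illustrative sketch of the flow described above, the Python fragment below reads a hypothetical HW/SW partitioning file and decides, per function, whether an accelerator core plus interface should be generated or the function stays in software. The XML schema and the function names are assumptions for the example, not the framework's actual format.

```python
import xml.etree.ElementTree as ET

# Hypothetical partitioning descriptor; the real framework defines its own XML schema.
SPEC = """
<partitioning>
  <function name="fir_filter"   target="hw" interface="memory_mapped"/>
  <function name="init_buffers" target="sw"/>
</partitioning>
"""

def load_partitioning(xml_text):
    """Return {function_name: target} from the HW/SW partitioning file."""
    root = ET.fromstring(xml_text)
    return {f.get("name"): f.get("target") for f in root.findall("function")}

def plan(functions, mapping):
    """Decide, per function, whether to generate an accelerator core or keep software."""
    for name in functions:
        target = mapping.get(name, "sw")  # default: stay in software
        if target == "hw":
            print(f"{name}: generate core + interface, emit scheduling/synchronization code")
        else:
            print(f"{name}: compile as a software task")

plan(["fir_filter", "init_buffers"], load_partitioning(SPEC))
```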

Relevance: 30.00%

Abstract:

Partitioning is a common approach to developing mixed-criticality systems, where partitions are isolated from each other both in the temporal and the spatial domain in order to prevent low-criticality subsystems from compromising other subsystems with a higher level of criticality in case of misbehaviour. The advent of many-core processors, on the other hand, opens the way to highly parallel systems in which all partitions can be allocated to dedicated processor cores. This trend will simplify processor scheduling, although other issues such as mutual interference in the temporal domain may arise as a consequence of memory and device sharing. The paper describes an architecture for multi-core partitioned systems, including critical subsystems built with the Ada Ravenscar profile. Some implementation issues are discussed, and experience in implementing the ORK kernel on the XtratuM partitioning hypervisor is presented.

Relevance: 30.00%

Abstract:

Mode switches are used to partition the system's behavior into different modes to reduce the complexity of large embedded systems. Such systems operate in multiple modes, each corresponding to a specific application scenario; they are called Multi-Mode Systems (MMS). A different piece of software is normally executed for each mode. At any given time, the system can be in one of the predefined modes and then be switched to another as a result of a certain condition. A mode switch mechanism (or mode change protocol) is used to shift the system from one mode to another at run-time. In this thesis we have used a hierarchical scheduling framework to implement a multi-mode system called the Multi-Mode Hierarchical Scheduling Framework (MMHSF). A two-level Hierarchical Scheduling Framework (HSF) has already been implemented in an open-source real-time operating system, FreeRTOS, to support temporal isolation among real-time components. The main contribution of this thesis is the extension of the HSF with a multi-mode feature, with an emphasis on making minimal changes to the underlying operating system (FreeRTOS) and its HSF implementation. Our implementation uses fixed-priority preemptive scheduling at both the local and global scheduling levels, together with idling periodic servers. It now also supports different system modes that can be switched at run-time. Each subsystem and task exhibits different timing attributes in each mode, and upon a Mode Change Request (MCR) the task set and timing interfaces of the entire system (including subsystems and tasks) change. A mode change protocol specifies precisely how the system mode will be changed. However, an application may need not only to change mode but also to use a different mode change protocol semantics. For example, the mode change from normal to shutdown can allow all tasks to be completed before the mode itself is changed, while changing from normal to emergency may require aborting all tasks instantly. In our work, both the system mode and the mode change protocol can be changed at run-time. We have implemented three different mode change protocols to switch from one mode to another: the Suspend/Resume protocol, the Abort protocol, and the Complete protocol. These protocols increase the flexibility of the system, allowing users to select the way they want to switch to a new mode. The implementation of MMHSF is tested and evaluated on a 32-bit AVR-based EVK1100 board with an AVR32UC3A0512 microcontroller. We have tested the behavior of each system mode under each mode change protocol. The thesis also reports performance measurements for all mode change protocols.

RESUMEN: Mode switches are used to partition the system's behavior into different modes, thereby reducing the complexity of large embedded systems. These systems have multiple operating modes, each corresponding to different scenarios and applications; they are called Multi-Mode Systems (MMS). Each mode normally executes a different piece of code. At a given moment the system, which is in a particular mode, can be switched to a different mode as a result of a previously defined condition. Mode change mechanisms (or mode change protocols) are used to move the system from one mode to another at run-time. In this work a hierarchical scheduling framework has been used to implement a multi-mode system called MMHSF (Multi-Mode Hierarchical Scheduling Framework). This system is based on the HSF (Hierarchical Scheduling Framework), a two-level scheduling framework implemented in the open-source real-time operating system FreeRTOS, which provides temporal isolation between components. The main contribution of this work is the extension of the HSF into a multi-mode system, making the minimum necessary changes to the FreeRTOS operating system and the existing HSF implementation. This implementation uses fixed-priority scheduling at both levels of the hierarchy, filling the idle time between tasks with idling servers. In addition, the system is able to change from one mode to another at run-time. Each subsystem and task can have different timing attributes (priority, period and execution time) depending on the mode. Upon a Mode Change Request (MCR), the set of running tasks can change, as well as the attributes of the servers and tasks. A mode change protocol specifies precisely how the system will be changed from one mode to another. However, an application may require not only a mode change but also that it be performed in a specific way. For example, the mode change from "normal" to "shutdown" may allow the running tasks to finish before the transition is completed, whereas the change from "normal" to "emergency" may require aborting all tasks instantly. In this work both the mode and the change protocol can be changed at run-time, but they must be defined beforehand by the developer. Three change protocols have been defined: the "suspend/resume" protocol, the "abort" protocol, and the "complete" protocol. These protocols increase the flexibility of the system, allowing the user to select how to switch to the new mode. The MMHSF implementation has been tested and evaluated on an AVR EVK1100 board with an AVR32UC3A0 microcontroller. The behavior of the different modes has been verified for each of the previously defined protocols. Finally, performance measurements of the different mode change protocols are provided.
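
A schematic Python sketch of the three mode change protocol semantics described above; the real MMHSF is implemented in C on top of FreeRTOS, so the Task class and the mode table here are placeholders, not the actual HSF API.

```python
from enum import Enum, auto

class Protocol(Enum):
    SUSPEND_RESUME = auto()  # park old-mode tasks; they may be resumed if that mode returns
    ABORT = auto()           # stop old-mode tasks immediately (e.g. normal -> emergency)
    COMPLETE = auto()        # let old-mode tasks run to completion first (e.g. normal -> shutdown)

class Task:
    """Placeholder task object; the real tasks live inside FreeRTOS servers."""
    def __init__(self, name):
        self.name, self.state = name, "ready"
    def suspend(self):  self.state = "suspended"
    def abort(self):    self.state = "aborted"
    def complete(self): self.state = "completed"
    def start(self):    self.state = "running"

def handle_mcr(modes, current_mode, new_mode, protocol):
    """Apply a Mode Change Request: dispose of the old mode's task set according to the
    selected protocol, then activate the task set of the new mode."""
    action = {Protocol.SUSPEND_RESUME: Task.suspend,
              Protocol.ABORT: Task.abort,
              Protocol.COMPLETE: Task.complete}[protocol]
    for task in modes[current_mode]:
        action(task)
    for task in modes[new_mode]:
        task.start()
    return new_mode

modes = {"normal": [Task("sensor"), Task("logger")], "emergency": [Task("alarm")]}
current = handle_mcr(modes, "normal", "emergency", Protocol.ABORT)
```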

Relevance: 30.00%

Abstract:

The current approach to developing mixed-criticality systems is by partitioning the hardware resources (processors, memory and I/O devices) among the different applications. Partitions are isolated from each other both in the temporal and the spatial domain, so that low-criticality applications cannot compromise other applications with a higher level of criticality in case of misbehaviour. New architectures based on many-core processors open the way to highly parallel systems in which each partition can be allocated to a set of dedicated processor cores, thus simplifying partition scheduling and temporal separation. Moreover, spatial isolation can also benefit from many-core architectures, by using simpler hardware mechanisms to protect the address spaces of different applications. This paper describes an architecture for many-core embedded partitioned systems, together with some implementation advice for spatial isolation.

Relevance: 30.00%

Abstract:

The development of applications and services for mobile systems faces a wide range of devices with very heterogeneous capabilities whose response times are difficult to predict. The research described in this work aims to address this issue by developing a computational model that formalizes the problem and defines methods for adjusting the computation. The proposal combines imprecise-computation strategies with cloud computing paradigms in order to provide flexible implementation frameworks for embedded or mobile devices. As a result, imprecise-computation scheduling of the embedded system's workload is used to decide which computations to move to the cloud, according to the priority and response time of the tasks to be executed, so that the desired productivity and quality of service can be met. A technique to estimate network delays and to schedule tasks more accurately is illustrated in this paper. An application example is described in which this technique is evaluated in execution contexts with heterogeneous workloads in order to check the validity of the proposed model.
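
A minimal Python sketch of the kind of decision rule the abstract describes: estimate the network delay, always run the mandatory part of a task locally, and offload (or drop) the optional part depending on whether the estimated cloud execution still meets the deadline. The task model, the EWMA delay estimator and the cloud speed-up factor are simplifying assumptions, not the paper's computational model.

```python
from dataclasses import dataclass

@dataclass
class Task:
    mandatory_ms: float  # local execution time of the mandatory part
    optional_ms: float   # local execution time of the optional, quality-improving part
    deadline_ms: float
    priority: int

def estimate_rtt(samples, alpha=0.2):
    """Exponentially weighted moving average over measured round-trip delays."""
    rtt = samples[0]
    for s in samples[1:]:
        rtt = (1 - alpha) * rtt + alpha * s
    return rtt

def schedule_optional(task, cloud_speedup, rtt_ms):
    """Return where to run the optional part: 'local', 'cloud' or 'skip'."""
    budget = task.deadline_ms - task.mandatory_ms        # time left after the mandatory part
    if task.optional_ms <= budget:
        return "local"                                   # fits locally, no network risk
    if task.optional_ms / cloud_speedup + rtt_ms <= budget:
        return "cloud"                                   # offloading still meets the deadline
    return "skip"                                        # imprecise result: mandatory part only

rtt = estimate_rtt([42.0, 55.0, 48.0])
print(schedule_optional(Task(20.0, 80.0, 90.0, priority=1), cloud_speedup=4.0, rtt_ms=rtt))
```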

Relevance: 30.00%

Abstract:

Federal Transit Administration, Washington, D.C.

Relevance: 30.00%

Abstract:

Federal Transit Administration, Washington, D.C.

Relevance: 30.00%

Abstract:

Thesis--University of Illinois at Urbana-Champaign.

Relevance: 30.00%

Abstract:

This paper describes an experiment in designing, implementing and testing a Transport layer cluster scheduling and dispatching architecture. The motivation for the experiment was the hypothesis that a Transport layer clustering solution may offer advantages over the existing industry-standard Network layer and Data Link layer approaches. The critical success factors initially established to guide and evaluate the experiment were reduced dispatcher workload, reduced dispatcher internal state memory requirements, resilience to distributed denial-of-service attacks, and cluster software design simplicity. The functional design stage of the experiment produced a Transport layer strategy for scheduling and load balancing based on the specification of two new TCP options. Implementation required the introduction of the newly specified TCP options into the Linux (2.4) kernel. The implementation produced an extended Linux Socket API to facilitate user-process access to the additional TCP capability. The testing stage of the experiment confirmed the operational efficiency of the solution.
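
The paper's mechanism relies on two new TCP options added to a modified Linux 2.4 kernel, which cannot be reproduced here; the Python toy below only illustrates the dispatching idea at the connection level, namely a dispatcher that selects a back-end for each incoming connection while keeping almost no per-connection state. All names and the round-robin policy are illustrative assumptions.

```python
from itertools import cycle

class TransportDispatcher:
    """Toy dispatcher: chooses a cluster back-end for each new connection and keeps only
    aggregate counters, not per-connection state. The actual redirection in the paper is
    carried by new TCP options inside the kernel, which is outside the scope of this toy."""

    def __init__(self, backends):
        self._rr = cycle(backends)   # simple round-robin over cluster members
        self.dispatched = 0

    def schedule(self, client_addr):
        backend = next(self._rr)
        self.dispatched += 1
        return backend               # e.g. ("10.0.0.2", 80)

d = TransportDispatcher([("10.0.0.2", 80), ("10.0.0.3", 80)])
print(d.schedule(("192.0.2.1", 51515)))
```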

Relevance: 30.00%

Abstract:

This paper formulates several mathematical models for simultaneously determining the optimal sequence of component placements and the assignment of component types to feeders, i.e., the integrated scheduling problem, for a type of surface mount technology placement machine called the sequential pick-and-place (PAP) machine. A PAP machine has multiple stationary feeders storing components, a stationary working table holding a printed circuit board (PCB), and a movable placement head that picks up components from the feeders and places them on the board. The objective of the integrated problem is to minimize the total distance traveled by the placement head. Two integer nonlinear programming models are formulated first. Each of them is then converted into an equivalent integer linear model. The models for the integrated problem are verified with two commercial packages. In addition, a hybrid genetic algorithm previously developed by the authors is adopted to solve the models. The algorithm not only generates optimal solutions quickly for small-sized problems, but also outperforms the genetic algorithms developed by other researchers in terms of total traveling distance.
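
To make the objective concrete, the Python sketch below computes the head travel distance for a given placement sequence and feeder assignment, and builds an initial sequence with a simple nearest-neighbour heuristic. This is only an illustration of the distance objective stated above; it is neither the paper's integer programming models nor its hybrid genetic algorithm.

```python
from dataclasses import dataclass
from math import dist

@dataclass(frozen=True)
class Placement:
    board_slot: int  # index of the board location to populate
    part_type: str   # component type, which determines the feeder to pick from

def total_travel(sequence, feeder_of, feeder_pos, board_pos, start=(0.0, 0.0)):
    """Distance travelled by the head: for each placement it moves to the feeder that
    stores the component type, then to the board location, and continues from there."""
    head, travelled = start, 0.0
    for p in sequence:
        pick = feeder_pos[feeder_of[p.part_type]]
        place = board_pos[p.board_slot]
        travelled += dist(head, pick) + dist(pick, place)
        head = place
    return travelled

def nearest_neighbour(placements, feeder_of, feeder_pos, board_pos, start=(0.0, 0.0)):
    """Greedy initial sequence: always perform next the placement with the shortest
    pick-and-place detour from the current head position."""
    remaining, head, order = list(placements), start, []
    while remaining:
        def detour(p):
            pick = feeder_pos[feeder_of[p.part_type]]
            return dist(head, pick) + dist(pick, board_pos[p.board_slot])
        nxt = min(remaining, key=detour)
        remaining.remove(nxt)
        order.append(nxt)
        head = board_pos[nxt.board_slot]
    return order

feeder_pos = {0: (0.0, 10.0), 1: (5.0, 10.0)}
board_pos = {0: (2.0, 2.0), 1: (4.0, 3.0)}
feeder_of = {"R10k": 0, "C100n": 1}
order = nearest_neighbour([Placement(0, "R10k"), Placement(1, "C100n")], feeder_of, feeder_pos, board_pos)
print(total_travel(order, feeder_of, feeder_pos, board_pos))
```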