Abstract:
In real-time systems, there are two distinct trends for scheduling task sets on unicore systems: non-preemptive and preemptive scheduling. Non-preemptive scheduling is not subject to any preemption delay, but its schedulability may be quite poor, whereas fully preemptive scheduling is subject to preemption delay but benefits from higher flexibility in scheduling decisions. The delay introduced by task preemptions is a major source of pessimism in the analysis of a task's Worst-Case Execution Time (WCET) in real-time systems. Preemptive scheduling policies with non-preemptive regions are a hybrid between the non-preemptive and fully preemptive paradigms, combining the benefits of both worlds. In this paper, we exploit the connection between the progression of a task through its operations and the knowledge of its preemption delay as a function of that progression. The pessimism in the preemption delay estimation is thereby reduced compared to state-of-the-art methods, thanks to the additional information available to the analysis.
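To make the idea concrete, here is a minimal Python sketch (not the paper's method) of how a preemption delay bound could depend on a task's progression, assuming the set of useful cache blocks (UCBs) alive at each progress point is known from a prior cache analysis; all names and numbers are hypothetical.

```python
# Hedged illustration: if the useful cache blocks (UCBs) alive at each
# progress point of a task are known, the cache-related preemption delay
# (CRPD) can be bounded per point instead of using one global worst case.
BLOCK_RELOAD_COST = 10  # cycles to refetch one evicted cache block (assumed)

def crpd_at(progress_point, ucbs_by_point):
    """Delay bound if a preemption hits exactly at this progress point."""
    return len(ucbs_by_point.get(progress_point, set())) * BLOCK_RELOAD_COST

def crpd_over_region(points, ucbs_by_point):
    """Delay bound for a preemption anywhere inside a non-preemptive region."""
    return max((crpd_at(p, ucbs_by_point) for p in points), default=0)

ucbs = {"p0": {"b1", "b2"}, "p1": {"b1"}, "p2": set()}
print(crpd_at("p1", ucbs))                         # 10
print(crpd_over_region(["p0", "p1", "p2"], ucbs))  # 20
```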
Abstract:
Recent embedded processor architectures containing multiple heterogeneous cores and non-coherent caches have renewed attention to the use of Software Transactional Memory (STM) as a building block for developing parallel applications. STM promises to ease concurrent and parallel software development, but relies on the ability to abort conflicting transactions to maintain data consistency, which in turn affects the execution time of tasks carrying transactions. As a result, the timing behaviour of the task set may not be predictable, so it is crucial to limit the execution time overheads resulting from aborts. In this paper we formalise a FIFO-based algorithm to order the sequence of commits of concurrent transactions. We then propose and evaluate two non-preemptive and one SRP-based fully preemptive scheduling strategies to avoid transaction starvation.
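As a minimal sketch of what a FIFO-based commit order could look like (not the paper's formalised algorithm, and ignoring its interaction with the non-preemptive and SRP-based scheduling strategies), transactions could take a ticket when they start and be allowed to commit only in ticket order:

```python
# Minimal FIFO commit-ordering sketch in the spirit of the abstract; the
# paper's actual algorithm and its scheduler integration are not reproduced.
import threading

class FifoCommitOrder:
    def __init__(self):
        self._cond = threading.Condition()
        self._next_ticket = 0   # ticket handed to the next starting transaction
        self._now_serving = 0   # ticket currently allowed to commit

    def begin(self):
        """Take a ticket when the transaction starts."""
        with self._cond:
            ticket = self._next_ticket
            self._next_ticket += 1
            return ticket

    def commit(self, ticket):
        """Block until it is this transaction's turn to commit."""
        with self._cond:
            while ticket != self._now_serving:
                self._cond.wait()
            # ... apply the transaction's write set here ...
            self._now_serving += 1
            self._cond.notify_all()
```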
Abstract:
In embedded systems, the timing behaviour of the control mechanisms is sometimes of critical importance for operational safety. These high-criticality systems require strict compliance with the offline predicted task execution time. The execution time of a task subject to preemption may vary significantly compared to its non-preemptive execution. Hence, when preemptive scheduling is required to operate the workload, preemption delay estimation is of paramount importance. In this paper, a preemption delay estimation method for floating non-preemptive scheduling policies is presented. This work builds on [1], extending the model and optimising it considerably. The preemption delay function benefits from a major tightness improvement in the WCET analysis context. Moreover, additional information is provided in the form of an extrinsic cache-miss function, which enables the method to provide a solution in situations where the non-preemptive regions are small. Finally, experimental results from the implementation of the proposed solutions in Heptane on real benchmarks validate the significance of this work.
The utilization bound of non-preemptive rate-monotonic scheduling in controller area networks is 25%
Abstract:
Consider a distributed computer system comprising many computer nodes interconnected by a controller area network (CAN) bus. We prove that if priorities are assigned to message streams using rate-monotonic (RM) assignment and the requested capacity of the CAN bus does not exceed 25%, then all deadlines are met.
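The sufficient condition stated above is easy to check in practice; the sketch below (with hypothetical message streams) simply compares the requested bus capacity against the 25% bound, assuming priorities are assigned rate-monotonically:

```python
# Hedged sketch: checks only the 25% utilization bound stated in the abstract.
# Each stream is a (C, T) pair: worst-case transmission time of one frame and
# its period, in the same time unit.

def rm_can_bound_holds(streams, bound=0.25):
    """Return True if the requested CAN bus capacity stays within the bound."""
    utilization = sum(c / t for c, t in streams)
    return utilization <= bound

# Example: three streams requesting 5% + 10% + 8% = 23% of the bus, so the
# sufficient test passes.
streams = [(0.5, 10.0), (1.0, 10.0), (1.6, 20.0)]
print(rm_can_bound_holds(streams))  # True
```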
Abstract:
We consider the problem of scheduling a multi-mode real-time system upon identical multiprocessor platforms. Since it is a multi-mode system, the system can change from one mode to another such that the current task set is replaced with a new task set. Ensuring that deadlines are met requires not only that a schedulability test is performed on tasks in each mode but also that (i) a protocol for transitioning from one mode to another is specified and (ii) a schedulability test for each transition is performed. We propose two protocols which ensure that all the expected requirements are met during every transition between every pair of operating modes of the system. Moreover, we prove the correctness of our proposed algorithms by extending the theory about the makespan determination problem.
Abstract:
In this thesis we have studied a few models involving self-generation of priorities. Priority queues have been extensively discussed in the literature; however, those are situations in which priority is assigned to (or possessed by) customers at the time of their arrival. Nevertheless, customers generating priority while in the system is a common phenomenon. Such situations arise, for example, at a physician's clinic, with aircraft hovering over an airport that are running out of fuel while waiting for clearance to land, and in several communication systems. Quantification of these is rarely seen in the literature, except for the work cited in the introduction. Our attempt is to quantify a few such problems. In doing so, we have also generalized the classical priority queues by introducing priority generation (moving to higher priorities while waiting). We have proceeded systematically from the single-server queue to the multi-server queue. We also introduced customers with repeated attempts (retrials) that generate priorities. All models analyzed in this thesis involve non-preemptive service. Since the models are not analytically tractable, a large number of numerical illustrations is provided in each chapter to give a feel for the working of the systems.
Abstract:
While the earliest deadline first (EDF) algorithm is known to be optimal as a uniprocessor scheduling policy, its implementation comes at a cost in terms of complexity. Fixed task-priority algorithms, on the other hand, have lower complexity but a higher likelihood of task sets being declared unschedulable when compared to EDF. Various attempts have been undertaken to increase the chances of proving a task set schedulable with similarly low complexity. In some cases this was achieved by modifying applications to limit preemptions, at the cost of flexibility. In this work, we explore several variants of a concept that limits interference by locking down the ready queue at certain instants. The aim is to increase the prospects of schedulability of a given task system without compromising on complexity or flexibility, when compared to the regular fixed task-priority algorithm. As a final contribution, a new preemption threshold assignment algorithm is provided which is less complex and more straightforward than the previous method available in the literature.
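For readers unfamiliar with preemption thresholds, the snippet below illustrates the basic rule only, not the assignment algorithm contributed by the paper: a ready task preempts the running task only if its priority exceeds the running task's threshold, which lies between that task's own priority and the highest priority.

```python
# Illustration of the preemption-threshold rule only; the paper's threshold
# assignment algorithm is not reproduced here. Convention assumed: a larger
# number means a higher priority, and prio[i] <= threshold[i] for every task.
def can_preempt(ready_prio, running_threshold):
    """A ready task preempts the running task only above its threshold."""
    return ready_prio > running_threshold

print(can_preempt(ready_prio=5, running_threshold=3))  # True: preemption
print(can_preempt(ready_prio=4, running_threshold=4))  # False: blocked
```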
Abstract:
Fieldbus communication networks aim to interconnect sensors, actuators and controllers within distributed computer-controlled systems. Therefore, they constitute the foundation upon which real-time applications are to be implemented. A specific class of fieldbus communication networks is based on a simplified version of token-passing protocols, where each station may transfer, at most, a single message per token visit (SMTV). In this paper, we establish an analogy between non-preemptive task scheduling in single processors and the scheduling of messages on SMTV token-passing networks. Moreover, we clearly show that concepts such as blocking and interference in non-preemptive task scheduling have their counterparts in the scheduling of messages on SMTV token-passing networks. Based on this task/message scheduling analogy, we provide pre-run-time schedulability conditions for supporting real-time messages with SMTV token-passing networks. We provide both utilisation-based and response time tests to perform the pre-run-time schedulability analysis of real-time messages on SMTV token-passing networks, considering RM/DM (rate monotonic/deadline monotonic) and EDF (earliest deadline first) priority assignment schemes.
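To illustrate the blocking and interference terms mentioned above, the following is a simplified, textbook-style non-preemptive fixed-priority response-time recurrence; the paper's actual utilisation-based and response time tests for SMTV token-passing networks account for protocol-specific factors (token rotation, one message per visit) that are not modelled here.

```python
# Simplified non-preemptive fixed-priority response-time sketch, not the
# paper's test. C[j], T[j]: worst-case transmission time and period of
# stream j; a lower index means a higher priority; deadlines are assumed
# equal to periods. Returns an upper bound, or infinity if the test gives up.
from math import ceil, isclose

def np_response_time(C, T, i, eps=1e-9):
    B = max((C[j] for j in range(len(C)) if j > i), default=0.0)  # blocking
    w = B                                                         # queuing delay
    while True:
        w_next = B + sum(ceil((w + eps) / T[j]) * C[j] for j in range(i))
        if isclose(w_next, w):
            return w_next + C[i]          # delay plus own transmission time
        if w_next + C[i] > T[i]:          # exceeds the deadline: give up
            return float("inf")
        w = w_next

C, T = [1.0, 2.0, 3.0], [10.0, 20.0, 40.0]
print(np_response_time(C, T, 1))  # blocked by stream 2, interfered by stream 0
```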
Abstract:
This paper addresses the non-preemptive single-machine scheduling problem of minimizing total tardiness. We are interested in the online version of this problem, where orders arrive at the system at random times. Jobs have to be scheduled without knowledge of what jobs will come afterwards. The processing times and the due dates become known when the order is placed. Orders are released only at the beginning of periodic intervals. A customized approximate dynamic programming method is introduced for this problem. The authors also present numerical experiments that assess the reliability of the new approach and show that it performs better than a myopic policy.
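As an illustration of the kind of myopic baseline the new method is compared against (the exact myopic policy used in the paper, and its periodic release mechanism, are not specified here), the sketch below dispatches the already-released job with the earliest due date whenever the machine becomes free and accumulates total tardiness:

```python
# Hedged sketch of a myopic earliest-due-date (EDD) dispatch rule for the
# online, non-preemptive single-machine total tardiness problem. This is an
# assumed baseline, not the paper's approximate dynamic programming method.
import heapq

def myopic_edd_total_tardiness(jobs):
    """jobs: list of (release, processing, due). Returns total tardiness."""
    jobs = sorted(jobs)               # order by release time
    ready, i, t, tardiness = [], 0, 0, 0
    while i < len(jobs) or ready:
        while i < len(jobs) and jobs[i][0] <= t:   # admit released jobs
            r, p, d = jobs[i]
            heapq.heappush(ready, (d, p))
            i += 1
        if not ready:                 # machine idles until the next release
            t = jobs[i][0]
            continue
        d, p = heapq.heappop(ready)   # earliest due date first, run to completion
        t += p
        tardiness += max(0, t - d)
    return tardiness

print(myopic_edd_total_tardiness([(0, 3, 4), (1, 2, 3), (5, 1, 10)]))  # 2
```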
Abstract:
In this paper we survey the most relevant results on the priority-based schedulability analysis of real-time tasks, for both fixed and dynamic priority assignment schemes. We give emphasis to worst-case response time analysis in non-preemptive contexts, which is fundamental for communication schedulability analysis. We define an architecture to support priority-based scheduling of messages at the application process level of a specific fieldbus communication network, the PROFIBUS. The proposed architecture improves the worst-case message response times, overcoming the limitation of first-come-first-served (FCFS) PROFIBUS queue implementations.
Abstract:
This paper studies static-priority preemptive scheduling on a multiprocessor using partitioned scheduling. We propose a new scheduling algorithm and prove that if the proposed algorithm is used and less than 50% of the capacity is requested, then all deadlines are met. It is known that for every static-priority multiprocessor scheduling algorithm there is a task set that misses a deadline although the requested capacity is arbitrarily close to 50%.
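The bound itself is straightforward to check; the sketch below (not the proposed partitioning algorithm, whose details are not given in the abstract) merely tests whether a task set requests less than 50% of the capacity of m identical processors:

```python
# Checks only the sufficient capacity condition quoted in the abstract; it
# does not implement the paper's partitioned scheduling algorithm.
def requests_less_than_half_capacity(tasks, m):
    """tasks: list of (C, T) pairs; m: number of identical processors."""
    return sum(c / t for c, t in tasks) < 0.5 * m

tasks = [(1.0, 10.0), (2.0, 8.0), (3.0, 30.0)]        # utilisation = 0.45
print(requests_less_than_half_capacity(tasks, m=1))   # True
```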
Abstract:
WiDom is a wireless prioritized medium access control protocol which offers a very large number of priority levels. Hence, it brings the potential to employ non-preemptive static-priority scheduling and schedulability analysis for a wireless channel, assuming that the overhead of WiDom is modeled properly. Recent research has created a new version of WiDom (we call it slotted WiDom) which offers lower overhead compared to the previous version. In this paper we propose a new schedulability analysis for slotted WiDom and extend it to work for message streams with release jitter. Furthermore, to provide an accurate timing analysis, we must include the effect of transmission faults on message latencies. Thus, in the proposed analysis we consider the existence of different noise sources and develop the analysis for the case where messages are transmitted over noisy wireless channels. The proposed analysis is evaluated by testing slotted WiDom in two different modes on a real test-bed. The results from the experiments provide firm validation of our findings.
Abstract:
Consider the problem of non-migratively scheduling a set of implicit-deadline sporadic tasks to meet all deadlines on a two-type heterogeneous multiprocessor platform. We ask the following question: Does there exist a phase transition behavior for the two-type heterogeneous multiprocessor scheduling problem? We also provide some initial observations via simulations performed on randomly generated task sets.
Abstract:
It is widely assumed that scheduling real-time tasks becomes more difficult as their deadlines get shorter. With shorter deadlines, however, tasks potentially compete less with each other for processors, and this can produce more contention-free slots, at which the number of competing tasks is smaller than or equal to the number of available processors. This paper presents a policy (called the CF policy) that utilizes such contention-free slots effectively. The policy can be employed by any work-conserving, preemptive scheduling algorithm, and we show that any algorithm extended with this policy dominates the original algorithm in terms of schedulability. We also present improved schedulability tests for algorithms that employ this policy, based on the observation that interference from tasks is reduced when their executions are postponed to contention-free slots. Finally, using the properties of the CF policy, we derive the counter-intuitive claim that shortening task deadlines can help improve the schedulability of task systems. We present heuristics that effectively reduce task deadlines for better schedulability without performing any exhaustive search.
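The notion of a contention-free slot used above can be illustrated directly from its definition (the per-slot task counts below are hypothetical): a slot is contention-free when the number of competing tasks does not exceed the number of available processors.

```python
# Small illustration of the abstract's definition of a contention-free slot;
# competing_per_slot gives the number of tasks competing in each time slot,
# and m is the number of processors. Inputs are made up for the example.
def contention_free_slots(competing_per_slot, m):
    return [s for s, n in enumerate(competing_per_slot) if n <= m]

print(contention_free_slots([3, 2, 5, 1, 2], m=2))  # [1, 3, 4]
```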
Abstract:
Preemptions account for a non-negligible overhead during system execution. There has been a substantial amount of research on estimating the delay incurred due to the loss of working sets in the processor state (caches, registers, TLBs), and some on avoiding preemptions or limiting the preemption cost. We present an algorithm to reduce preemptions by further delaying the start of execution of high-priority tasks in fixed-priority scheduling. Our approaches take advantage of the floating non-preemptive regions model and exploit the fact that, during the schedule, the relative task phasing will differ from the worst-case scenario in terms of admissible preemption deferral. Furthermore, approximations that reduce the complexity of the proposed approach are presented. A substantial set of experiments demonstrates that the approach and its approximations improve over existing work, in particular for high-utilisation systems, where savings of up to 22% in the number of preemptions are attained.