933 results for real-time scheduling algorithm
Abstract:
Hand and finger tracking is of major importance in healthcare, for the rehabilitation of hand function impaired by neurological disorders, and in virtual environment applications, such as character animation for online games or movies. Current solutions consist mostly of motion tracking gloves with embedded resistive bend sensors that often suffer from signal drift, sensor saturation, sensor displacement and complex calibration procedures. More advanced solutions provide better tracking stability, but at a higher cost. The proposed solution aims to provide the required precision, stability and feasibility through the combination of eleven inertial measurement units (IMUs). Each unit captures the spatial orientation of the body segment to which it is attached. To fully capture hand movement, each finger carries two units (at the proximal and distal phalanges), plus one unit at the back of the hand. The proposed glove was validated in two distinct steps: a) evaluation of the sensors’ accuracy and stability over time; b) evaluation of the bending trajectories during usual finger flexion tasks based on the intra-class correlation coefficient (ICC). Results revealed that the glove was sensitive mainly to magnetic field distortions and sensor tuning. The inclusion of hard- and soft-iron correction algorithms, together with accelerometer and gyroscope drift and temperature compensation, provided increased stability and precision. The evaluation of finger trajectories yielded high ICC values, with an overall reliability within limits that satisfy the application's requirements. The developed low-cost system provides straightforward calibration and usability, qualifying the device for hand and finger tracking in the healthcare and animation industries.
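A minimal sketch of how a finger joint's flexion angle can be derived from the orientations reported by the two per-finger IMUs mentioned above (proximal and distal phalanges). This is an illustration assuming quaternion outputs in a common reference frame, not the authors' implementation:

```python
# Illustrative sketch (not the paper's implementation): estimating a finger joint's
# flexion angle from the orientation quaternions of the proximal- and distal-phalanx
# IMUs. Assumes both IMUs report unit quaternions in a common reference frame.
import math

def quat_conjugate(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_multiply(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def joint_angle(q_proximal, q_distal):
    """Angle of the relative rotation between the two phalanx segments (radians)."""
    q_rel = quat_multiply(quat_conjugate(q_proximal), q_distal)
    w = max(-1.0, min(1.0, q_rel[0]))
    return 2.0 * math.acos(abs(w))

# Example: distal segment rotated 30 degrees about the x-axis w.r.t. the proximal one.
half = math.radians(30) / 2
print(math.degrees(joint_angle((1, 0, 0, 0), (math.cos(half), math.sin(half), 0, 0))))  # ~30.0
```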
Abstract:
Embedded systems are increasingly complex and dynamic, imposing progressively higher development time and costs. Tuning a particular system for deployment is thus becoming more demanding, even more so when considering systems which have to adapt themselves to evolving requirements and changing service requests. In this perspective, run-time monitoring of the system behaviour becomes an important requirement, allowing the actual scheduling progress and resource utilization to be captured dynamically. For this to succeed, operating systems need to expose their internal behaviour and state, making it available to external applications, and a run-time monitoring mechanism must be available. However, such a mechanism can impose a burden on the system itself if not used wisely. In this paper we explore this problem and propose a framework intended to provide this run-time mechanism whilst achieving code separation, run-time efficiency and flexibility for the final developer.
Abstract:
Ethernet is the most popular LAN technology. Its low price and robustness, resulting from its wide acceptance and deployment, have created an eagerness to expand its responsibilities to the factory floor, where real-time requirements must be fulfilled. However, it is difficult to build a real-time control network using Ethernet, because its MAC protocol, the 1-persistent CSMA/CD protocol with the Binary Exponential Backoff (BEB) collision resolution algorithm, has unpredictable delay characteristics. Many anticipate that recent technological advances in Ethernet, such as the emerging Fast/Gigabit Ethernet, micro-segmentation and full-duplex operation using switches, will also enable it to support time-critical applications. This technical report provides a comprehensive look at the unpredictability inherent to Ethernet and at recent technological advances towards real-time operation.
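A short sketch of the standard BEB rule illustrates why the CSMA/CD delay is unpredictable: the backoff window doubles with each collision, the wait is random, and a frame can even be dropped after repeated collisions.

```python
# Sketch of the Binary Exponential Backoff (BEB) rule used by half-duplex Ethernet
# (IEEE 802.3), showing why transmission delay is not deterministically bounded:
# the backoff is random and a frame may be dropped after too many collisions.
import random

SLOT_TIME_BITS = 512   # one slot time = 512 bit times for 10/100 Mbit/s Ethernet
MAX_ATTEMPTS = 16      # the frame is discarded after 16 failed attempts
BACKOFF_LIMIT = 10     # the contention window stops growing after 10 collisions

def backoff_slots(collision_count):
    """Random backoff (in slot times) drawn after the given number of collisions."""
    if collision_count >= MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: frame dropped")
    window = 2 ** min(collision_count, BACKOFF_LIMIT)
    return random.randrange(window)   # uniform in [0, 2^min(c,10) - 1]

# The worst-case wait grows exponentially with the collision count, and the draw is random:
for c in (1, 3, 10, 15):
    print(c, backoff_slots(c))
```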
Abstract:
In this paper, we analyse the ability of the P-NET [1] fieldbus to cope with the timing requirements of a Distributed Computer Control System (DCCS), where messages associated with discrete events must be made available within a bounded maximum time. The main objective of this work is to analyse how the network access and queueing delays, imposed by P-NET’s virtual token Medium Access Control (MAC) mechanism, affect the real-time behaviour of the supported DCCS.
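A simplified illustration (not the paper's analysis) of the kind of bound involved: in a round-robin virtual-token scheme, a master's worst-case network access delay occurs when the token has just left it and every other master uses its maximum holding time once.

```python
# Simplified illustration (not the paper's P-NET analysis): an upper bound on the
# network access delay of one master in a round-robin virtual-token scheme, assuming
# each other master holds the token for at most its maximum holding time.
def worst_case_access_delay(max_holding_times, my_index):
    """Token has just left this master; every other master uses its full share once."""
    return sum(t for i, t in enumerate(max_holding_times) if i != my_index)

# Example: 4 masters, each holding the token for at most 2 ms.
print(worst_case_access_delay([2.0, 2.0, 2.0, 2.0], my_index=0))  # 6.0 ms
```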
Abstract:
Consider the problem of scheduling a set of implicit-deadline sporadic tasks to meet all deadlines on a two-type heterogeneous multiprocessor platform where a task may request at most one of |R| shared resources. There are m1 processors of type-1 and m2 processors of type-2. Tasks may migrate only when requesting or releasing resources. We present a new algorithm, FF-3C-vpr, which offers the following guarantee: if a task set can be scheduled to meet all deadlines by an optimal task assignment scheme that only allows tasks to migrate when requesting or releasing a resource, then FF-3C-vpr also meets all deadlines if given processors 4+6*ceil(|R|/min(m1,m2)) times as fast. As far as we know, it is the first result for resource sharing on heterogeneous platforms with provable performance.
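The speedup bound stated above is purely arithmetic; a tiny sketch makes it concrete:

```python
# Sketch of the speedup bound stated above: FF-3C-vpr meets all deadlines on processors
# 4 + 6*ceil(|R| / min(m1, m2)) times as fast as those used by the optimal reference scheme.
import math

def speedup_bound(num_resources, m1, m2):
    return 4 + 6 * math.ceil(num_resources / min(m1, m2))

# Example: 3 shared resources, 2 processors of type-1 and 4 of type-2.
print(speedup_bound(3, 2, 4))  # 4 + 6*ceil(3/2) = 16
```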
Abstract:
Real-time systems demand guaranteed and predictable run-time behaviour in order to ensure that no task misses its deadline. Over the years, we have witnessed an ever-increasing demand for functionality enhancements in embedded real-time systems. Along with the functionalities, the design itself grows more complex. Constraints such as energy consumption, time, and space bounds also require attention and proper handling. Additionally, efficient scheduling algorithms, as proven through analyses and simulations, often impose requirements that have a significant run-time cost, especially in the context of multi-core systems. In order to further investigate the behaviour of such systems and to quantify and compare the overheads involved, we have developed SPARTS, a simulator of a generic embedded real-time device. The tasks in the simulator are described by externally visible parameters (e.g. minimum inter-arrival time, sporadicity, WCET, BCET), rather than by the code of the tasks. While our current implementation is primarily focused on our immediate needs in the area of power-aware scheduling, it is designed to be extensible to accommodate different task properties, scheduling algorithms and/or hardware models for application in a wide variety of simulations. The source code of SPARTS is available for download at [1].
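A hypothetical sketch (not SPARTS's actual API; field names are illustrative assumptions) of a task described purely by the externally visible parameters listed above, rather than by its code:

```python
# Hypothetical sketch, not SPARTS's actual API: a sporadic task characterised only by
# externally visible parameters, as described in the abstract.
from dataclasses import dataclass
import random

@dataclass
class SporadicTask:
    min_inter_arrival: float   # minimum separation between consecutive job releases
    sporadicity: float         # extra random delay added on top of the minimum separation
    wcet: float                # worst-case execution time
    bcet: float                # best-case execution time

    def next_release_after(self, last_release):
        return last_release + self.min_inter_arrival + random.uniform(0.0, self.sporadicity)

    def random_execution_time(self):
        return random.uniform(self.bcet, self.wcet)

task = SporadicTask(min_inter_arrival=10.0, sporadicity=5.0, wcet=3.0, bcet=1.0)
print(task.next_release_after(0.0), task.random_execution_time())
```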
Abstract:
We present an algorithm for bandwidth allocation for delay-sensitive traffic in multi-hop wireless sensor networks. Our solution considers both periodic and aperiodic real-time traffic in a unified manner. We also present a distributed MAC protocol that conforms to the bandwidth allocation and thus satisfies the latency requirements of real-time traffic. Additionally, the protocol provides best-effort service to non-real-time traffic. We derive the utilization bounds of our MAC protocol.
Abstract:
This paper focuses on the scheduling of tasks with hard and soft real-time constraints in open and dynamic real-time systems. It starts by presenting a capacity sharing and stealing (CSS) strategy that supports the coexistence of guaranteed and non-guaranteed bandwidth servers to efficiently handle soft tasks’ overloads by making additional capacity available from two sources: (i) reclaiming unused reserved capacity when jobs complete in less than their budgeted execution time and (ii) stealing reserved capacity from inactive non-isolated servers used to schedule best-effort jobs. CSS is then combined with the concept of bandwidth inheritance to efficiently exchange reserved bandwidth among sets of inter-dependent tasks which share resources and exhibit precedence constraints, assuming that no prior information on critical sections and computation times is available. The proposed Capacity Exchange Protocol (CXP) achieves better performance and lower overhead than other available solutions and introduces a novel approach to integrating precedence constraints among tasks of open real-time systems.
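A minimal sketch of the general idea behind source (i) above, reclaiming unused reserved capacity when a job finishes early, not the actual CSS/CXP rules; names and bookkeeping are illustrative assumptions:

```python
# Minimal sketch of capacity reclaiming (source (i) above), not the actual CSS/CXP rules.
class Server:
    def __init__(self, name, budget):
        self.name = name
        self.budget = budget           # reserved execution capacity for the current period

reclaimable = 0.0                      # pool of residual capacity available to needy servers

def job_completed(server, budgeted_time, actual_time):
    """A job completed in less than its budgeted execution time: reclaim the difference."""
    global reclaimable
    residual = max(0.0, budgeted_time - actual_time)
    server.budget -= actual_time
    reclaimable += residual

def handle_overload(server, extra_demand):
    """A soft task overran: serve it first from reclaimed capacity, then from its own budget."""
    global reclaimable
    from_pool = min(reclaimable, extra_demand)
    reclaimable -= from_pool
    server.budget -= (extra_demand - from_pool)

s = Server("soft-task server", budget=10.0)
job_completed(s, budgeted_time=4.0, actual_time=2.5)   # 1.5 units become reclaimable
handle_overload(s, extra_demand=1.0)                   # fully covered by the reclaimed pool
print(s.budget, reclaimable)                           # 7.5 0.5
```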
Abstract:
The foreseen evolution of chip architectures towards a higher number of heterogeneous cores, with non-uniform memory and non-coherent caches, brings renewed attention to the use of Software Transactional Memory (STM) as an alternative to lock-based synchronisation. However, STM relies on the possibility of aborting conflicting transactions to maintain data consistency, which impacts the responsiveness and timing guarantees required by real-time systems. In these systems, contention delays must be (efficiently) limited so that the response times of tasks executing transactions are upper-bounded and task sets can be feasibly scheduled. In this paper we defend the role of the transaction contention manager in reducing the number of transaction retries and in helping the real-time scheduler to assure schedulability. For this purpose, the contention management policy should be aware of on-line scheduling information.
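As an illustration only (the abstract does not specify the policy), the sketch below shows a contention manager that consults on-line scheduling information and, on a conflict, aborts the transaction of the task with the later absolute deadline:

```python
# Illustrative policy sketch, not taken from the paper: a deadline-aware contention manager.
from dataclasses import dataclass

@dataclass
class Transaction:
    task_id: int
    absolute_deadline: float   # taken from the real-time scheduler's ready queue
    retries: int = 0

def resolve_conflict(running, challenger):
    """Return (winner, loser); the loser is aborted and will retry."""
    if challenger.absolute_deadline < running.absolute_deadline:
        winner, loser = challenger, running
    else:
        winner, loser = running, challenger
    loser.retries += 1
    return winner, loser

w, l = resolve_conflict(Transaction(1, 50.0), Transaction(2, 30.0))
print(w.task_id, l.task_id, l.retries)  # 2 1 1
```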
Abstract:
Synchronization is a challenging and important issue for time-sensitive Wireless Sensor Networks (WSNs), since it requires mutual spatio-temporal coordination between the nodes. In this respect, the IEEE 802.15.4/ZigBee protocols are promising technologies for WSNs, but they remain ambiguous about how to efficiently build synchronized multi-cluster networks, specifically in the case of cluster-tree topologies. In fact, the current IEEE 802.15.4/ZigBee specifications restrict synchronization to beacon-enabled star networks (through the generation of periodic beacon frames), while they support multi-hop networking in mesh topologies, but with no synchronization. Even though both specifications mention the possible use of cluster-tree topologies, which combine multi-hop and synchronization features, a description of how to effectively construct such a network topology is missing. This paper tackles this issue by unveiling the ambiguities regarding the use of the cluster-tree topology and by proposing a synchronization mechanism based on Time Division Beacon Scheduling (TDBS) to build cluster-tree WSNs. In addition, we propose a methodology for efficiently managing duty cycles in every cluster, ensuring the fairest use of bandwidth resources. The feasibility of the TDBS mechanism is demonstrated through an experimental test-bed based on our open-source implementation of the IEEE 802.15.4/ZigBee protocols.
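An illustrative sketch (not the paper's TDBS algorithm) of the underlying idea: cluster heads are given non-overlapping beacon offsets so that each cluster's active period fits inside the shared beacon interval, using the standard IEEE 802.15.4 relations BI = aBaseSuperframeDuration * 2^BO and SD = aBaseSuperframeDuration * 2^SO.

```python
# Illustrative sketch of time-division beacon scheduling, not the paper's TDBS algorithm.
A_BASE_SUPERFRAME_DURATION = 960  # symbols (IEEE 802.15.4)

def schedule_beacons(beacon_order, superframe_orders):
    """Return a start offset for each cluster, or raise if they cannot all fit."""
    beacon_interval = A_BASE_SUPERFRAME_DURATION * (2 ** beacon_order)
    offsets, cursor = [], 0
    for so in superframe_orders:
        duration = A_BASE_SUPERFRAME_DURATION * (2 ** so)
        if cursor + duration > beacon_interval:
            raise ValueError("active periods exceed the beacon interval")
        offsets.append(cursor)
        cursor += duration
    return offsets

# Example: BO = 6 and three clusters with SO = 4, 4 and 3.
print(schedule_beacons(6, [4, 4, 3]))  # [0, 15360, 30720]
```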
Abstract:
Due to the growing complexity and adaptability requirements of real-time embedded systems, which often exhibit unrestricted inter-dependencies among supported services and user-imposed quality constraints, it is increasingly difficult to optimise the level of service of a dynamic task set within a useful and bounded time. This is even more difficult when intending to benefit from the full potential of an open distributed cooperating environment, where service characteristics are not known beforehand. This paper proposes an iterative refinement approach for a service’s QoS configuration that takes into account services’ inter-dependencies and quality constraints, trading off the achieved solution’s quality against the cost of computation. Extensive simulations demonstrate that the proposed anytime algorithm is able to quickly find a good initial solution and effectively optimises the rate at which the quality of the current solution improves as the algorithm is given more time to run. The added benefits of the proposed approach clearly surpass its reduced overhead.
Abstract:
This paper studies static-priority preemptive scheduling on a multiprocessor using partitioned scheduling. We propose a new scheduling algorithm and prove that, if the proposed algorithm is used and less than 50% of the capacity is requested, then all deadlines are met. It is known that for every static-priority multiprocessor scheduling algorithm there is a task set that misses a deadline although the requested capacity is arbitrarily close to 50%.
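A hedged illustration of the stated guarantee (this is only the utilization check, not the partitioning algorithm proposed in the paper): a task set requesting less than 50% of the platform's capacity is schedulable.

```python
# Illustration of the 50% capacity condition stated above; not the paper's algorithm.
# Each task is given by its utilization (execution time / period).
def within_half_capacity(task_utilizations, num_processors):
    return sum(task_utilizations) < 0.5 * num_processors

print(within_half_capacity([0.3, 0.4, 0.5, 0.2], num_processors=4))  # True  (1.4 < 2.0)
print(within_half_capacity([0.6, 0.6, 0.6, 0.5], num_processors=4))  # False (2.3 >= 2.0)
```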
Abstract:
Nowadays, many real-time operating systems discretize time based on a system time unit. To take this behavior into account, real-time scheduling algorithms must adopt a discrete-time model in which both the timing requirements of tasks and their time allocations have to be integer multiples of the system time unit. That is, tasks cannot be executed for less than one time unit, which implies that they always have to complete a minimum amount of work before they can be preempted. Assuming such a discrete-time model, the authors of Zhu et al. (Proceedings of the 24th IEEE International Real-Time Systems Symposium (RTSS 2003), 2003; J Parallel Distrib Comput 71(10):1411–1425, 2011) proposed an efficient “boundary fair” algorithm (named BF) and proved its optimality for the scheduling of periodic tasks while achieving full system utilization. However, BF cannot handle sporadic tasks due to their inherently irregular and unpredictable job release patterns. In this paper, we propose an optimal boundary-fair scheduling algorithm for sporadic tasks (named BF²), which follows the same principle as BF by making scheduling decisions only at job arrival times and (expected) task deadlines. This new algorithm was implemented in Linux and we show, through experiments conducted on a multicore machine, that BF² outperforms the state-of-the-art discrete-time optimal scheduler (PD²), benefiting from much lower scheduling overheads. Furthermore, these experimental results show that BF² is barely dependent on the length of the system time unit, while PD², the only other existing solution for the scheduling of sporadic tasks in discrete-time systems, sees its number of preemptions and migrations and the time spent taking scheduling decisions increase linearly as the time resolution of the system improves.
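A minimal sketch of the discrete-time model described above, in which all timing parameters become integer multiples of the system time unit; the rounding directions are a conservative assumption (demand up, available time down), not taken from the paper.

```python
# Minimal sketch of the discrete-time model: parameters as integer multiples of the time unit.
import math

def to_time_units(wcet, deadline, period, time_unit):
    return (math.ceil(wcet / time_unit),       # execution demand rounded up
            math.floor(deadline / time_unit),  # deadline rounded down
            math.floor(period / time_unit))    # minimum inter-arrival rounded down

# Example: a task with WCET 2.3 ms, deadline 10 ms, period 10 ms and a 1 ms system time unit.
print(to_time_units(2.3, 10.0, 10.0, 1.0))  # (3, 10, 10)
```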
Abstract:
Task scheduling is one of the key mechanisms to ensure timeliness in embedded real-time systems. Such systems often need to execute not only application tasks but also urgent routines (e.g. error-detection actions, consistency checkers, interrupt handlers) with minimum latency. Although fixed-priority schedulers such as Rate-Monotonic (RM) are in line with this need, they usually make only a low processor utilization available to the system. Moreover, this availability usually decreases with the number of considered tasks. If dynamic-priority schedulers such as Earliest Deadline First (EDF) are applied instead, high system utilization can be guaranteed, but the minimum latency for executing urgent routines may not be ensured. In this paper we describe a scheduling model according to which urgent routines are executed at the highest priority level and all other system tasks are scheduled by EDF. We show that the guaranteed processor utilization for the assumed scheduling model is at least as high as the one provided by RM for two tasks, namely 2(√2 − 1). Seven polynomial-time tests for checking the system timeliness are derived and proved correct. The proposed tests are compared against each other and to an exact test with exponential running time.
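The quoted bound is the RM utilization bound for two tasks, n*(2^(1/n) − 1) with n = 2, i.e. 2(√2 − 1) ≈ 0.828. The sketch below evaluates it and uses it in a simple utilization check; this is an illustration, not one of the paper's seven tests.

```python
# Sketch of the utilization bound mentioned above; the check is illustrative only,
# not one of the paper's seven polynomial-time tests.
def rm_utilization_bound(n):
    return n * (2 ** (1.0 / n) - 1)

def fits_bound(task_utilizations):
    return sum(task_utilizations) <= rm_utilization_bound(2)

print(round(rm_utilization_bound(2), 3))                # 0.828
print(fits_bound([0.4, 0.35]), fits_bound([0.5, 0.4]))  # True False
```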
Abstract:
Recent paradigm changes in power systems have opened opportunities for the active participation of new players. Small and medium players gain new opportunities by participating in demand response programs. This paper explores optimal resource scheduling at two distinct levels. First, the network operator, facing large wind power variations, uses real-time pricing to induce consumers to accommodate those variations. Then, at the consumer level, each load is managed according to the consumer's preferences. The two-level resource scheduling has been implemented in a real-time simulation platform that uses hardware for controlling the consumer's loads. The illustrative example includes a situation with a large wind power shortage and focuses on a consumer with 18 loads.