885 results for Fixed-priority scheduling
Abstract:
This chapter presents some of the issues with holonic manufacturing systems. It starts by presenting the current manufacturing scenario and trends, and then provides some background information on the holonic concept and its application to manufacturing. The current limitations and future trends of manufacturing suggest more autonomous and distributed organisations for manufacturing systems; holonic manufacturing systems are proposed as a way to achieve such autonomy and decentralisation. After a brief literature survey, a specific research work is presented that handles scheduling in holonic manufacturing systems. This work is based on task and resource holons that cooperate with each other using a variant of the contract net protocol that allows the propagation of constraints between operations in the execution plan. The chapter ends by presenting some challenges and future research opportunities.
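For readers unfamiliar with the contract net protocol mentioned above, the following is a minimal Python sketch of one announce/bid/award round between a task and a set of resource holons; the class, method names and bid criterion are illustrative assumptions, and the constraint propagation performed by the chapter's variant is not modelled:

```python
# Minimal contract-net round between a task and resource holons (illustrative;
# the chapter's variant also propagates constraints between operations of the
# execution plan, which is not modelled here).
class ResourceHolon:
    def __init__(self, name, busy_until):
        self.name, self.busy_until = name, busy_until

    def make_bid(self, duration):
        return self.busy_until + duration      # bid = earliest completion time

    def award(self, duration):
        self.busy_until += duration            # commit the awarded operation

def contract_net_round(duration, resources):
    bids = [(r.make_bid(duration), r) for r in resources]   # announcement + bids
    best_time, winner = min(bids, key=lambda b: b[0])        # award to the best bid
    winner.award(duration)
    return winner.name, best_time

machines = [ResourceHolon("M1", busy_until=5), ResourceHolon("M2", busy_until=2)]
print(contract_net_round(duration=3, resources=machines))    # ('M2', 5)
```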
Abstract:
This paper describes a Multi-agent Scheduling System that assumes the existence of several Machine Agents (which are decision-making entities) distributed inside the Manufacturing System that interact and cooperate with other agents in order to obtain optimal or near-optimal global performance. Agents have to manage their internal behaviors and their relationships with other agents via cooperative negotiation, in accordance with business policies defined by the user manager. Some Multi-Agent System (MAS) organizational aspects are considered. An original Cooperation Mechanism for a Team-work based Architecture is proposed to address dynamic scheduling using Meta-Heuristics.
Abstract:
There are several hazards in histopathology laboratories, and their staff must ensure that their professional activity is set to the highest standards while complying with the best safety procedures. Formalin is one of the chemical hazards to which such professionals are routinely exposed. To decrease this contact, it has been suggested that 10% neutral buffered liquid formalin (FL) be replaced by 10% formalin gel (FG), since the latter reduces the likelihood of spills and splashes and releases lower fume levels during handling, proving itself less harmful. However, it is mandatory to assess the effectiveness of FG as a fixative and to ensure that subsequent complementary techniques, such as immunohistochemistry (IHC), are not compromised. Two groups of 30 samples from human placenta were fixed with the FG and FL fixatives for different periods of time (12, 24, and 48 hours) and thereafter processed, embedded, and sectioned. IHC for six different antibodies was performed and the results were scored (0–100) using an algorithm that took into account immunostaining intensity, percentage of stained structures, non-specific immunostaining, contrast, and morphological preservation. Parametric and non-parametric statistical tests were used (alpha = 0.05). All results were similar for both fixatives, with global score means of 95.36±6.65 for FL and 96.06±5.80 for FG, and without any statistical difference (P>0.05). The duration of fixation also had no statistical relevance (P>0.05). It is therefore shown here that FG can be an effective alternative to FL.
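A minimal sketch of how a composite IHC score of this kind could be computed; the weights, normalisation and penalty for non-specific staining below are illustrative assumptions, not the algorithm used in the study:

```python
# Hypothetical composite IHC score on a 0-100 scale. The criteria follow the
# abstract, but the weights and value ranges are illustrative assumptions.
def ihc_score(intensity, pct_stained, nonspecific, contrast, morphology):
    """All inputs are assumed to be normalised to [0, 1];
    non-specific immunostaining counts against the score."""
    weights = {"intensity": 0.3, "pct": 0.3, "contrast": 0.2, "morphology": 0.2}
    positive = (weights["intensity"] * intensity
                + weights["pct"] * pct_stained
                + weights["contrast"] * contrast
                + weights["morphology"] * morphology)
    return round(100 * positive * (1 - 0.5 * nonspecific), 2)

print(ihc_score(0.95, 1.0, 0.05, 0.9, 0.95))   # score for an example sample
```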
Abstract:
Modern real-time systems increasingly generate heavy and dynamic computational workloads, and it is becoming unrealistic to expect them to be implemented on uniprocessor systems. In fact, the shift from single-processor to multiprocessor systems can be seen, both in the general-purpose and in the embedded domain, as an energy-efficient way to improve application performance. At the same time, the proliferation of multiprocessor platforms has turned parallel programming into a topic of great interest, with dynamic parallelism rapidly gaining popularity as a programming model. The idea behind this model is to encourage programmers to expose all opportunities for parallelism by simply annotating potential parallel regions within the application. All such annotations are treated by the system merely as hints, which the language may ignore and replace with equivalent sequential constructs. How the computation is actually partitioned and mapped onto the various processors is therefore the responsibility of the compiler and of the underlying computing system. By removing this burden from the programmer, programming complexity is considerably reduced, which usually translates into higher productivity. However, unless the underlying scheduling mechanism is simple and fast, so as to keep the overall overhead low, the benefits of generating such fine-grained parallelism remain merely hypothetical. From this scheduling perspective, algorithms employing a work-stealing policy are increasingly popular, with proven efficiency in terms of time, space and communication requirements. However, these algorithms consider neither timing constraints nor any other form of task prioritisation, which prevents them from being applied directly to real-time systems. Moreover, they are traditionally implemented in the language runtime, creating a two-level scheduling system in which the predictability essential to a real-time system cannot be guaranteed. This thesis describes how the work-stealing approach can be redesigned to meet real-time requirements while retaining the fundamental principles that have produced such good results. Very briefly, the single conventional run queue (deque) is replaced by a queue of deques, ordered by increasing task priority. We then apply the well-known dynamic scheduling algorithm G-EDF on top, merge the rules of both, and thus our proposal is born: the RTWS scheduling algorithm. Taking advantage of the modularity offered by the Linux scheduler, RTWS is added as a new scheduling class in order to assess in practice whether the proposed algorithm is viable, that is, whether it guarantees the desired efficiency and schedulability. Modifying the Linux kernel is a difficult task, due to the complexity of its internal functions and the strong interdependencies between its subsystems. Nevertheless, one of the goals of this thesis was to make sure that RTWS is more than an interesting concept.
Accordingly, a significant part of this document is devoted to discussing the implementation of RTWS and to exposing problematic situations, many of them not considered in theory, such as the mismatch between different synchronisation mechanisms. The experimental results show that RTWS, compared with other practical work on the dynamic scheduling of tasks with timing constraints, significantly reduces scheduling overhead through efficient and scalable control of migrations and context switches (at least up to 8 CPUs), while achieving good dynamic balancing of the system load, and doing so at low cost. However, during the evaluation a flaw was detected in the RTWS implementation, namely how easily it gives up stealing work, which causes idle periods on the CPU in question when overall system utilisation is low. Although the work focused on keeping scheduling costs low and achieving good data locality, system schedulability was never neglected. In fact, the proposed scheduling algorithm proved quite robust, missing no deadlines in the experiments performed. We can therefore state that some priority inversion, caused by the BAS stealing sub-policy, does not compromise the schedulability goals and even helps to reduce contention on the data structures. Even so, RTWS also supports a deterministic stealing sub-policy: PAS. The experimental evaluation, however, did not provide a clear picture of the impact of one versus the other. Nevertheless, in general, we can conclude that RTWS is a promising solution for the efficient scheduling of parallel tasks with timing constraints.
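For reference, the G-EDF rule that RTWS layers on top of the deque ordering assigns priorities by absolute deadline; a standard formulation (notation ours, not taken from the thesis):

```latex
% Job j of task \tau_i, released at r_{i,j} with relative deadline D_i:
d_{i,j} = r_{i,j} + D_i
% Under G-EDF on m processors, at every instant the (at most) m ready jobs
% with the earliest absolute deadlines d_{i,j} execute. RTWS keeps each
% core's deques sorted by this deadline, so the front deque holds both the
% next job to run and the preferred target for stealing.
```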
Abstract:
In the proposed model, the independent system operator (ISO) provides the opportunity for maintenance outage rescheduling of generating units before each short-term (ST) time interval. Long-term (LT) scheduling, 1 or 2 years in advance, is essential for the ISO and the generation companies (GENCOs) to decide their LT strategies; however, it cannot be followed exactly and requires slight adjustments. The Cournot-Nash equilibrium is used to characterize the decision-making procedure of an individual GENCO for ST intervals, considering effective coordination with LT plans. Random inputs, such as the parameters of the loads' demand function, the hourly demand during the following ST time interval, and the expected generation pattern of the rivals, are included as scenarios in the stochastic mixed-integer program defined to model the payoff-maximizing objective of a GENCO. Scenario reduction algorithms are used to deal with the computational burden. Two reliability test systems were chosen to illustrate the effectiveness of the proposed model for the ST decision-making process for future planned outages from the point of view of a GENCO.
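For context, the payoff that a Cournot-Nash model of this kind maximises for each GENCO can be written in its generic single-period form; the paper's stochastic formulation adds scenario and maintenance terms that are not shown here:

```latex
% Generic single-period Cournot payoff for GENCO i (illustrative only):
\pi_i(q_i, q_{-i}) \;=\; P\!\Big(\textstyle\sum_{k} q_k\Big)\, q_i \;-\; C_i(q_i)
% P(.): inverse demand function; q_i: GENCO i's output; C_i: its cost function.
% At a Cournot-Nash equilibrium each q_i maximises \pi_i given the rivals' q_{-i}.
```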
Abstract:
This paper presents a modified Particle Swarm Optimization (PSO) methodology to solve the problem of energy resource management with high penetration of distributed generation and Electric Vehicles (EVs) with gridable capability (V2G). The objective of the day-ahead scheduling problem in this work is to minimize operation costs, namely energy costs, regarding the management of these resources in the smart grid context. The modifications applied to the PSO aim to improve its adequacy for the mentioned problem. The proposed Application Specific Modified Particle Swarm Optimization (ASMPSO) includes an intelligent mechanism to adjust velocity limits during the search process, as well as self-parameterization of the PSO parameters, making it more user-independent. It presents better robustness and convergence characteristics than the tested PSO variants, as well as better constraint handling. This enables its use for addressing real-world large-scale problems in much shorter times than deterministic methods, providing system operators with adequate decision support and achieving efficient resource scheduling, even when a significant number of alternative scenarios must be considered. The paper includes two realistic case studies with different penetrations of gridable vehicles (1000 and 2000). The proposed methodology is about 2600 times faster than the Mixed-Integer Non-Linear Programming (MINLP) reference technique, reducing the time required from 25 h to 36 s for the scenario with 2000 vehicles, with about one percent of difference in the objective function cost value.
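A minimal sketch of a PSO update step with velocity limits of the kind the abstract refers to; the inertia/acceleration values and the simple shrinking rule for the limits are illustrative assumptions, not the ASMPSO mechanism:

```python
import random

# Illustrative PSO velocity/position update with velocity clamping.
def pso_step(positions, velocities, pbest, gbest, vmax, w=0.7, c1=1.5, c2=1.5):
    for i, (x, v) in enumerate(zip(positions, velocities)):
        for d in range(len(x)):
            r1, r2 = random.random(), random.random()
            v[d] = (w * v[d]
                    + c1 * r1 * (pbest[i][d] - x[d])
                    + c2 * r2 * (gbest[d] - x[d]))
            v[d] = max(-vmax[d], min(vmax[d], v[d]))   # enforce the velocity limit
            x[d] += v[d]
    return positions, velocities

# A simple (assumed) adaptive rule: shrink the velocity limits as the search converges.
def shrink_vmax(vmax, factor=0.95):
    return [vd * factor for vd in vmax]
```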
Abstract:
LLF (Least Laxity First) scheduling, which assigns a higher priority to a task with a smaller laxity, has been known as an optimal preemptive scheduling algorithm on a single processor platform. However, little work has been done to illuminate its characteristics on multiprocessor platforms. In this paper, we identify the dynamics of laxity from the system's viewpoint and translate these dynamics into an LLF multiprocessor schedulability analysis. More specifically, we first characterize laxity properties under LLF scheduling, focusing on the laxity dynamics associated with a deadline miss. These laxity dynamics describe a lower bound, which leads to the deadline miss, on the number of tasks with certain laxity values at certain time instants. This lower bound is significant because it represents invariants for highly dynamic system parameters (laxity values). Since the laxity of a task depends on the amount of interference from higher-priority tasks, we can then derive a set of conditions to check whether a given task system can go into the laxity dynamics towards a deadline miss. In this way, to the authors' best knowledge, we propose the first LLF multiprocessor schedulability test based on its own laxity properties. We also develop an improved schedulability test that exploits slack values. We mathematically prove that the proposed LLF tests dominate the state-of-the-art EDZL tests. We also present simulation results to evaluate the schedulability performance of both the original and improved LLF tests in a quantitative manner.
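For readers unfamiliar with the notion, the laxity that drives LLF can be stated compactly (standard definition, notation ours):

```latex
% Laxity of job i at time t:
\ell_i(t) \;=\; d_i - t - e_i^{\mathrm{rem}}(t)
% d_i: absolute deadline; e_i^{rem}(t): remaining execution time at t.
% LLF gives higher priority to smaller \ell_i(t). Laxity decreases only while
% the job is not executing, and a deadline miss is unavoidable once
% \ell_i(t) < 0, which is the event the paper's laxity dynamics lead up to.
```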
Abstract:
Technological developments are pulling fieldbus networks to support a new, wide class of applications, such as industrial multimedia applications. To enable its use in this kind of application, the TCP/IP suite of protocols can be integrated within a fieldbus stack, leading to a dual-stack approach that is briefly outlined in the paper. One important requirement that must be fulfilled by this approach is that the hard real-time guarantees provided to the control-related traffic ("native" fieldbus traffic) are kept. At the same time, it must also provide the desired quality of service (QoS) to IP applications. The focus of the paper is on how, in such a dual-stack approach, QoS can be efficiently provided to IP applications requiring quasi-constant bandwidth.
Abstract:
Dynamic parallel scheduling using work-stealing has gained popularity in academia and industry for its good performance, ease of implementation and theoretical bounds on space and time. Cores treat their own double-ended queues (deques) as a stack, pushing and popping threads from the bottom, but treat the deque of another randomly selected busy core as a queue, stealing threads only from the top, whenever they are idle. However, this standard approach cannot be directly applied to real-time systems, where the importance of parallelising tasks is increasing due to the limitations of multiprocessor scheduling theory regarding parallelism. Using one deque per core is an obvious source of priority inversion, since high-priority tasks may end up enqueued after lower-priority tasks, possibly leading to deadline misses because, in that case, the lower-priority tasks are the candidates when a stealing operation occurs. Our proposal is to replace the single non-priority deque of work-stealing with ordered per-processor priority deques of ready threads. The scheduling algorithm starts with a single deque per core, but unlike traditional work-stealing, the total number of deques in the system may now exceed the number of processors. Instead of stealing randomly, cores steal from the highest-priority deque.
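A minimal Python sketch of the per-processor structure described above: one deque per priority level, kept ordered so that both the owner and a thief take work from the highest-priority deque. The class, method names and priority encoding (smaller value = higher priority) are illustrative assumptions:

```python
from collections import deque
import bisect

class PriorityDequeQueue:
    """Per-core ready queue: one deque per distinct priority (e.g. absolute
    deadline), kept sorted so that index 0 is the highest priority."""
    def __init__(self):
        self.priorities = []   # sorted priorities (smaller value = higher priority)
        self.deques = {}       # priority -> deque of ready threads

    def push(self, prio, thread):
        if prio not in self.deques:
            bisect.insort(self.priorities, prio)
            self.deques[prio] = deque()
        self.deques[prio].append(thread)           # owner pushes at the bottom

    def _take(self, from_top):
        if not self.priorities:
            return None
        prio = self.priorities[0]                  # always the highest-priority deque
        d = self.deques[prio]
        thread = d.popleft() if from_top else d.pop()
        if not d:
            del self.deques[prio]
            self.priorities.pop(0)
        return thread

    def pop_local(self):
        return self._take(from_top=False)          # owner pops LIFO from the bottom

    def steal(self):
        return self._take(from_top=True)           # thief steals FIFO from the top
```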
Abstract:
In real-time systems, there are two distinct trends for scheduling task sets on unicore systems: non-preemptive and preemptive scheduling. Non-preemptive scheduling is obviously not subject to any preemption delay, but its schedulability may be quite poor, whereas fully preemptive scheduling is subject to preemption delay but benefits from higher flexibility in the scheduling decisions. The time delay caused by task preemptions is a major source of pessimism in the analysis of the task Worst-Case Execution Time (WCET) in real-time systems. Preemptive scheduling policies including non-preemptive regions are a hybrid solution between the non-preemptive and fully preemptive scheduling paradigms, which makes it possible to combine the benefits of both worlds. In this paper, we exploit the connection between the progression of a task through its operations and the knowledge of the preemption delays as a function of that progression. The pessimism in the preemption delay estimation is thus reduced in comparison to state-of-the-art methods, thanks to the additional information available in the analysis.
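One compact way to express the idea of progression-dependent preemption cost (illustrative, not the paper's exact analysis):

```latex
% If a job of task \tau_i can be preempted at most k times and \gamma_i(p)
% upper-bounds the preemption delay incurred when the task has reached
% progress point p, an inflated execution-time bound is
C_i' \;=\; C_i \;+\; \sum_{j=1}^{k} \gamma_i\!\big(p_j\big)
% where p_1, \dots, p_k are the points at which preemptions may occur.
% Knowing \gamma_i as a function of progression tightens the bound compared
% with charging the single worst-case preemption cost at every preemption.
```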
Abstract:
Known algorithms capable of scheduling implicit-deadline sporadic tasks over identical processors at up to 100% utilisation invariably involve numerous preemptions and migrations. To the challenge of devising a scheduling scheme with as few preemptions and migrations as possible, for a given guaranteed utilisation bound, we respond with the algorithm NPS-F. It is configurable with a parameter, trading off guaranteed schedulable utilisation (up to 100%) against preemptions. For any possible configuration, NPS-F introduces fewer preemptions than any other known algorithm matching its utilisation bound. A clustered variant of the algorithm, for systems made of multicore chips, eliminates (costly) off-chip task migrations by dividing processors into disjoint clusters formed by cores on the same chip (with the cluster size being a parameter). Clusters are independently scheduled (each using non-clustered NPS-F). The utilisation bound is only moderately affected. We also formulate an important extension (applicable to both clustered and non-clustered NPS-F) which optimises the supply of processing time to executing tasks and makes it more granular. This reduces the processing capacity requirements for schedulability without increasing preemptions.
Abstract:
A large part of the power dissipation in a system is generated by I/O devices. Increasingly, these devices provide power-saving mechanisms, inter alia to enhance battery life. While I/O device scheduling has been studied in the past for real-time systems, the use of energy resources by these scheduling algorithms may be improved. These approaches were crafted assuming a very large device-transition overhead. Technology enhancements have allowed hardware vendors to reduce the device transition overhead and energy consumption. We propose an intra-task device scheduling algorithm for real-time systems that allows devices to be shut down while ensuring system schedulability. Our results show an energy gain of up to 90% when compared to the techniques proposed in the state of the art.
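A minimal sketch of the classic break-even-time criterion that underlies this kind of device shut-down decision; the paper's intra-task algorithm is more elaborate and also checks schedulability, which the assumed functions below do not model:

```python
# Shut a device down only if the coming idle interval is long enough that the
# energy saved while asleep outweighs the cost of the sleep/wake transition.
def break_even_time(p_active, p_sleep, e_transition, t_transition):
    """Shortest idle interval (seconds) for which sleeping saves energy."""
    return max(t_transition,
               (e_transition - p_sleep * t_transition) / (p_active - p_sleep))

def should_shut_down(next_use_in, p_active, p_sleep, e_transition, t_transition):
    return next_use_in >= break_even_time(p_active, p_sleep, e_transition, t_transition)

# Example: 2.0 W active, 0.1 W asleep, a sleep/wake cycle costs 1.5 J and
# takes 0.3 s; the device is next needed in 2 s.
print(should_shut_down(2.0, 2.0, 0.1, 1.5, 0.3))   # True
```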
Abstract:
Significant research efforts are being devoted to Body Area Networks (BAN) due to their potential for revolutionizing healthcare practices. Energy efficiency and communication reliability are critically important for these networks. In an experimental study with three different mote platforms, we show that changes in human body shadowing, as well as changes in the relative distance and orientation of nodes caused by common human body movements, can result in significant fluctuations in the received signal strength within a BAN. Furthermore, regular movements, such as walking, typically manifest in approximately periodic variations in signal strength. We present an algorithm that predicts the signal strength peaks and evaluate it on real-world data. We present the design of an opportunistic MAC protocol, named BANMAC, that takes advantage of the periodic fluctuations of the signal strength to achieve high reliability even with low transmission power.
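A minimal sketch of how the dominant period of such approximately periodic signal-strength fluctuations could be estimated, so that transmissions can be scheduled around the predicted peaks; the autocorrelation-based estimator below is an assumption, not BANMAC's actual predictor:

```python
import numpy as np

def estimate_period(rssi, fs):
    """rssi: 1-D array of signal-strength samples; fs: sampling rate (Hz).
    Returns the dominant fluctuation period in seconds."""
    x = np.asarray(rssi, dtype=float) - np.mean(rssi)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # autocorrelation, lags >= 0
    below = np.nonzero(ac < 0)[0]                       # skip the lobe around lag 0
    start = below[0] if below.size else 1
    lag = start + np.argmax(ac[start:])
    return lag / fs

def next_peak_time(last_peak_time, period):
    # Schedule the next transmission around the predicted signal-strength peak.
    return last_peak_time + period
```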
Abstract:
Composition is a practice of key importance in software engineering. When real-time applications are composed, it is necessary that their timing properties (such as meeting the deadlines) are guaranteed. The composition is performed by establishing an interface between the application and the physical platform. Such an interface typically contains information about the amount of computing capacity needed by the application. On multiprocessor platforms, the interface should also present information about the degree of parallelism. Recently there have been quite a few interface proposals. However, they are either too complex to be handled or too pessimistic. In this paper we propose the Generalized Multiprocessor Periodic Resource model (GMPR), which is strictly superior to the MPR model without requiring an overly detailed description. We describe a method to generate the interface from the application specification. All these methods have been implemented in Matlab routines that are publicly available.
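For orientation, the MPR interface that GMPR generalises is the triple (Π, Θ, m); the following is a sketch of how we read the generalisation from the abstract (notation ours, to be checked against the paper):

```latex
% MPR interface: (\Pi, \Theta, m) -- every period \Pi the platform supplies
% \Theta units of execution with parallelism at most m.
% GMPR, as described here, refines the single budget into a vector
% (\Pi, \{\Theta_1, \dots, \Theta_m\}), \quad \Theta_1 \le \dots \le \Theta_m,
% where \Theta_k is the minimum supply guaranteed per period when at most k
% processors are used in parallel, capturing the degree of parallelism more
% precisely than a single (\Theta, m) pair.
```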