836 results for Overhead conductors


Relevance:

10.00%

Publisher:

Abstract:

Thesis presented in fulfilment of the requirements for the degree of Doctor in the field of Musical Sciences.

Relevance:

10.00%

Publisher:

Abstract:

Modern real-time systems increasingly generate heavy and dynamic computational workloads, and it is becoming unrealistic to expect them to be implemented on uniprocessor systems. In fact, the shift from single-processor to multiprocessor systems can be seen, both in the general-purpose domain and in that of embedded systems, as an energy-efficient way of improving application performance. At the same time, the proliferation of multiprocessor platforms has turned parallel programming into a topic of great interest, and dynamic parallelism has quickly gained popularity as a programming model. The idea behind this model is to encourage programmers to expose all opportunities for parallelism by simply indicating potential parallel regions within their applications. All these annotations are treated by the system merely as hints, and the language itself may ignore them and replace them with equivalent sequential constructs. How the computation is actually subdivided and mapped onto the various processors is therefore the responsibility of the compiler and of the underlying computing system. By lifting this burden from the programmer, programming complexity is considerably reduced, which usually translates into higher productivity. However, unless the underlying scheduling mechanism is simple and fast, so as to keep the overall overhead low, the benefits of generating such fine-grained parallelism will be merely hypothetical. From this scheduling perspective, algorithms employing a work-stealing policy are increasingly popular, with proven efficiency in terms of time, space and communication requirements. However, these algorithms consider neither timing constraints nor any other form of task prioritisation, which prevents them from being applied directly to real-time systems. Moreover, they are traditionally implemented in the language runtime, creating a two-level scheduling system in which the predictability essential to a real-time system cannot be guaranteed. This thesis describes how the work-stealing approach can be redesigned to meet real-time requirements while preserving the fundamental principles that have produced such good results. In short, the single conventional work queue (deque) is replaced by a queue of deques, ordered by increasing task priority. We then apply the well-known dynamic scheduling algorithm G-EDF on top of it, blend the rules of both, and thus our proposal is born: the RTWS scheduling algorithm. Taking advantage of the modularity offered by the Linux scheduler, RTWS is added as a new scheduling class in order to evaluate in practice whether the proposed algorithm is viable, that is, whether it guarantees the desired efficiency and schedulability. Modifying the Linux kernel is a difficult task, owing to the complexity of its internal functions and the strong interdependencies between its subsystems. Nevertheless, one of the goals of this thesis was to make sure that RTWS is more than an interesting concept.

A significant part of this document is therefore devoted to discussing the implementation of RTWS and to exposing problematic situations, many of them not considered in theory, such as the mismatch between the various synchronisation mechanisms. The experimental results show that, compared with other practical work on dynamic scheduling of tasks with timing constraints, RTWS significantly reduces scheduling overhead through efficient and scalable control of migrations and context switches (at least up to 8 CPUs), while achieving good dynamic balancing of the system load at little cost. However, the evaluation revealed a flaw in the RTWS implementation: it gives up stealing work too easily, which causes idle periods on the CPU in question when overall system utilisation is low. Although the work focused on keeping scheduling costs low and on achieving good data locality, system schedulability was never neglected. In fact, the proposed scheduling algorithm proved to be quite robust, missing no deadlines in the experiments performed. We can therefore state that some priority inversion, caused by the BAS stealing sub-policy, does not compromise the schedulability goals and even helps to reduce contention on the data structures. Even so, RTWS also supports a deterministic stealing sub-policy, PAS. The experimental evaluation, however, did not give a clear picture of the relative impact of the two. Overall, we can conclude that RTWS is a promising solution for the efficient scheduling of parallel tasks with timing constraints.
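
To make the queue-of-deques idea above concrete, the following is a minimal user-space C sketch of a per-CPU run-queue holding one deque per priority level, with the owner popping locally in LIFO order and thieves taking the oldest, highest-priority work. All names and sizes (rtws_rq, pick_next_job, steal_from, MAX_PRIO, DEQ_CAP) are illustrative assumptions; this is not the RTWS scheduling class implemented in the thesis, which also handles locking, G-EDF admission and migration.

```c
/* Minimal sketch: per-CPU run-queue as a priority-ordered set of deques.
 * Illustrative only -- names and sizes are assumptions, not the RTWS
 * scheduling class described above. */
#include <stddef.h>

#define MAX_PRIO 8   /* assumed number of priority (deadline) levels */
#define DEQ_CAP  64  /* assumed per-deque capacity */

struct job { unsigned long long deadline; void (*run)(void *); void *arg; };

struct deque { struct job *slot[DEQ_CAP]; int head, tail; };

struct rtws_rq { struct deque deq[MAX_PRIO]; };  /* deq[0] = highest priority */

static int deque_empty(const struct deque *d) { return d->head == d->tail; }

/* Enqueue a job at the given priority level (no wrap-around in this sketch). */
static int push_job(struct rtws_rq *rq, int p, struct job *j)
{
    struct deque *d = &rq->deq[p];
    if (d->tail == DEQ_CAP)
        return -1;
    d->slot[d->tail++] = j;
    return 0;
}

/* Owner pops from the bottom (LIFO) of its highest-priority non-empty deque. */
static struct job *pick_next_job(struct rtws_rq *rq)
{
    for (int p = 0; p < MAX_PRIO; p++) {
        struct deque *d = &rq->deq[p];
        if (!deque_empty(d))
            return d->slot[--d->tail];
    }
    return NULL;  /* nothing runnable locally: try to steal */
}

/* A thief steals from the top (FIFO) of the victim's highest-priority deque,
 * so the oldest, highest-priority work migrates first. */
static struct job *steal_from(struct rtws_rq *victim)
{
    for (int p = 0; p < MAX_PRIO; p++) {
        struct deque *d = &victim->deq[p];
        if (!deque_empty(d))
            return d->slot[d->head++];
    }
    return NULL;
}
```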

Relevance:

10.00%

Publisher:

Abstract:

The technological evolution witnessed in recent decades has been one of the main drivers of the need for electrical energy. Consequently, upgrades both to equipment in the distribution network and to quality of service have been decisive in coping with society's ever-growing consumption of electricity. This evolution has significantly affected the performance of protection systems and their associated automation, which are becoming increasingly reliable and provide greater safety both for network assets and for people. It has also had an impact on medium-voltage distribution lines, which are evolving from overhead networks to mixed distribution lines. Mixed distribution lines have different technical and economic performance with respect to protection systems and their automatic reclosing schemes. This work analyses some of the parameters that influence the operation of protections and their reclosing schemes.

Relevance:

10.00%

Publisher:

Abstract:

Controller Area Network (CAN) is a fieldbus network suitable for small-scale distributed computer-controlled systems (DCCS), appropriate for sending and receiving short real-time messages at speeds of up to 1 Mbit/s. Several studies are available on how to guarantee the real-time requirements of CAN messages, providing pre-run-time schedulability conditions for the real-time communication requirements of DCCS traffic. Usually, it is considered that CAN guarantees atomic multicast properties by means of its extensive error detection/signaling mechanisms. However, there are some error situations where messages can be delivered in duplicate or delivered only to a subset of the receivers, leading to inconsistencies in the supported applications. In order to prevent such inconsistencies, a middleware for reliable communication in CAN is proposed, taking advantage of CAN's synchronous properties to minimize the runtime overhead. The middleware comprises a set of atomic multicast and consolidation protocols, upon which the reliable communication properties are guaranteed. The related timing analysis demonstrates that, in spite of the extra stack of protocols, the real-time properties of CAN are preserved, since the predictability of message transfer is guaranteed.
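
As an illustration of one building block such middleware typically needs, the sketch below shows per-sender duplicate suppression on the receiving side, assuming each frame carries a per-sender sequence number. This is a generic example under those assumptions, not the atomic multicast and consolidation protocols proposed in the paper.

```c
/* Minimal sketch of duplicate suppression at a CAN receiver: each sender tags
 * its frames with a sequence number, and the middleware delivers a message to
 * the application only once.  Illustrative assumption only -- not the atomic
 * multicast/consolidation protocols described above. */
#include <stdint.h>
#include <stdbool.h>

#define MAX_SENDERS 32

struct can_msg {
    uint8_t sender;   /* assumed sender identifier encoded in the CAN ID */
    uint8_t seq;      /* assumed per-sender sequence number in the payload */
    uint8_t data[8];
};

static uint8_t last_seq[MAX_SENDERS];
static bool    seen_any[MAX_SENDERS];

/* Returns true if the message should be delivered to the application,
 * false if it repeats the last accepted frame from that sender. */
bool accept_once(const struct can_msg *m)
{
    if (m->sender >= MAX_SENDERS)
        return false;
    if (seen_any[m->sender] && m->seq == last_seq[m->sender])
        return false;                 /* duplicate delivery: drop it */
    seen_any[m->sender] = true;
    last_seq[m->sender] = m->seq;
    return true;
}
```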

Relevance:

10.00%

Publisher:

Abstract:

In the past few years, Tabling has emerged as a powerful logic programming model. The integration of concurrency features into the implementation of Tabling systems is demanded by the need to use recently developed tabling applications within distributed systems, where a process has to respond concurrently to several requests. Support for sharing tables among the concurrent threads of a Tabling process is a desirable feature, as it preserves one of Tabling's virtues, the re-use of computations by other threads, and allows efficient usage of the available memory. However, the incremental completion of tables that are evaluated concurrently is not a trivial problem. In this dissertation we describe the integration of concurrency mechanisms, by way of multi-threading, in a state-of-the-art Tabling and Prolog system, XSB. We begin by reviewing the main concepts of a formal description of tabled computations, SLG resolution, and of the implementation of Tabling under the SLG-WAM, the abstract machine supported by XSB. We describe the different scheduling strategies provided by XSB and introduce some new properties of local scheduling, a scheduling strategy for SLG resolution. We proceed to describe our implementation work, starting with the process of integrating multi-threading in a Prolog system supporting Tabling without addressing the problem of shared tables, and discussing the trade-offs and implementation decisions involved. We then describe an optimistic algorithm for the concurrent sharing of completed tables, Shared Completed Tables, which allows tables to be shared without incurring deadlocks under local scheduling. This method relies on the execution properties of local scheduling and includes full support for negation. We provide a theoretical framework and discuss the implementation's correctness and complexity. After that, we describe a method for sharing tables among threads that allows parallelism in the computation of inter-dependent subgoals, which we name Concurrent Completion, and we informally argue for its correctness. We give detailed performance measurements of the multi-threaded XSB system over a variety of machines and operating systems, for both the Shared Completed Tables and the Concurrent Completion implementations, focusing on the overhead over the sequential engine and the scalability of the system. We finish with a comparison of XSB with other multi-threaded Prolog systems and compare our approach to concurrent tabling with parallel and distributed methods for the evaluation of tabling. Finally, we identify future research directions.
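
The optimistic idea behind sharing completed tables can be illustrated with a generic double-checked access pattern in C: a thread uses a completed table without locking and falls back to the lock (or to evaluating the subgoal itself) only while the table is still open. The names and structure here are assumptions for illustration, not XSB's SLG-WAM code.

```c
/* Minimal sketch of optimistic access to a shared completed table: readers
 * check a 'completed' flag without locking and only take the lock when the
 * table may still be being filled in.  Generic pattern with assumed names --
 * not XSB's engine code. */
#include <pthread.h>
#include <stdatomic.h>

struct answer { struct answer *next; /* ... answer term ... */ };

struct table {
    atomic_bool completed;        /* set once the subgoal is fully evaluated */
    pthread_mutex_t lock;
    struct answer *answers;       /* answer list, immutable once completed */
};

/* Returns the answer list if the table is already complete (lock-free path),
 * otherwise NULL, telling the caller to evaluate the subgoal itself. */
struct answer *try_use_completed(struct table *t)
{
    if (atomic_load(&t->completed))
        return t->answers;        /* safe: answers no longer change */

    pthread_mutex_lock(&t->lock); /* slow path: re-check under the lock */
    struct answer *a = atomic_load(&t->completed) ? t->answers : NULL;
    pthread_mutex_unlock(&t->lock);
    return a;
}
```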

Relevance:

10.00%

Publisher:

Abstract:

WiDom is a wireless prioritized medium access control protocol which offers a very large number of priority levels. Hence, it brings the potential to employ non-preemptive static-priority scheduling and schedulability analysis for a wireless channel, assuming that the overhead of WiDom is modeled properly. Recent research has created a new version of WiDom (we call it slotted WiDom) which offers lower overhead compared to the previous version. In this paper we propose a new schedulability analysis for slotted WiDom and extend it to work for message streams with release jitter. Furthermore, to provide an accurate timing analysis, we must include the effect of transmission faults on message latencies. Thus, in the proposed analysis we consider the existence of different noise sources and develop the analysis for the case where messages are transmitted over noisy wireless channels. The proposed analysis is evaluated by testing slotted WiDom in two different modes on a real test-bed. The results from the experiments provide a firm validation of our findings.
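
For context, the classical simplified recurrence for non-preemptive static-priority response-time analysis with release jitter, which this kind of schedulability analysis builds on, has the following shape (generic symbols; this is not the slotted-WiDom analysis derived in the paper, which additionally models protocol overhead and noisy channels):

```latex
% Classical simplified recurrence for non-preemptive fixed-priority
% response-time analysis with release jitter (generic symbols).
\begin{align*}
  R_i &= J_i + w_i + C_i, \\
  w_i^{(n+1)} &= B_i + \sum_{j \in hp(i)}
      \left\lceil \frac{w_i^{(n)} + J_j}{T_j} \right\rceil C_j,
  \qquad B_i = \max_{k \in lp(i)} C_k,
\end{align*}
% where $C_i$ is the worst-case transmission time, $T_i$ the period, $J_i$ the
% release jitter, $B_i$ the blocking from at most one lower-priority message
% already in transmission, and the iteration starts from $w_i^{(0)} = B_i$.
```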

Relevance:

10.00%

Publisher:

Abstract:

This paper proposes a global multiprocessor scheduling algorithm for the Linux kernel that combines the global EDF scheduler with a priority-aware work-stealing load balancing scheme, enabling parallel real-time tasks to be executed on more than one processor at a given time instant. We argue that some priority inversion may actually be acceptable, provided it helps reduce contention, communication, synchronisation and coordination between parallel threads, while still guaranteeing the system's expected predictability. Experimental results demonstrate the low scheduling overhead of the proposed approach compared to an existing real-time deadline-oriented scheduling class for the Linux kernel.

Relevance:

10.00%

Publisher:

Abstract:

A large part of the power dissipated in a system is generated by I/O devices. Increasingly, these devices provide power-saving mechanisms to, inter alia, enhance battery life. While I/O device scheduling for real-time systems has been studied in the past, the use of energy resources by these scheduling algorithms can be improved. Those approaches were designed under the assumption of a very large device-transition overhead. Technology improvements have since allowed hardware vendors to reduce the transition overhead and energy consumption of devices. We propose an intra-task device scheduling algorithm for real-time systems that allows devices to be shut down while ensuring system schedulability. Our results show an energy gain of up to 90% in the best case when compared to the state of the art.
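
The trade-off such an algorithm has to respect is the standard break-even reasoning for device shutdown, sketched below with generic symbols; this is the underlying idea, not the paper's exact schedulability-aware algorithm:

```latex
% Standard break-even reasoning behind shutting a device down during an idle
% interval (generic symbols).
\[
  t_{be} \;=\; \max\!\left( t_{sd} + t_{wu},\;
      \frac{E_{sd} + E_{wu}}{P_{on} - P_{sleep}} \right),
\]
% A device is switched to its sleep state only when the next predicted idle
% interval satisfies $t_{idle} \ge t_{be}$, where $t_{sd}, t_{wu}$
% ($E_{sd}, E_{wu}$) are the shutdown and wake-up latencies (energies) and
% $P_{on}, P_{sleep}$ the power drawn in the active and sleep states; the
% wake-up must also be issued early enough that the device is ready when the
% task next needs it.
```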

Relevance:

10.00%

Publisher:

Abstract:

High-level parallel languages offer a simple way for application programmers to specify parallelism in a form that easily scales with problem size, leaving the scheduling of the tasks onto processors to be performed at runtime. Therefore, if the underlying system cannot efficiently execute those applications on the available cores, the benefits will be lost. In this paper, we consider how to schedule highly heterogeneous parallel applications that require real-time performance guarantees on multicore processors. The paper proposes a novel scheduling approach that combines the global Earliest Deadline First (EDF) scheduler with a priority-aware work-stealing load balancing scheme, which enables parallel real-time tasks to be executed on more than one processor at a given time instant. Experimental results demonstrate the better scalability and lower scheduling overhead of the proposed approach compared to an existing real-time deadline-oriented scheduling class for the Linux kernel.

Relevance:

10.00%

Publisher:

Abstract:

A large part of the power dissipated in a system is generated by I/O devices. Increasingly, these devices provide power-saving mechanisms, inter alia to enhance battery life. While I/O device scheduling for real-time systems has been studied in the past, the use of energy resources by these scheduling algorithms can be improved. Those approaches were designed under the assumption of a very large device-transition overhead. Technology enhancements have allowed hardware vendors to reduce the device-transition overhead and energy consumption. We propose an intra-task device scheduling algorithm for real-time systems that allows devices to be shut down while ensuring system schedulability. Our results show an energy gain of up to 90% when compared to the techniques proposed in the state of the art.

Relevance:

10.00%

Publisher:

Abstract:

Wireless Sensor Networks (WSNs) are being used for a number of applications involving infrastructure monitoring, building energy monitoring and industrial sensing. The difficulty of programming individual sensor nodes and the associated overhead have encouraged researchers to design macro-programming systems which can help program the network as a whole or as a combination of subnets. Most current macro-programming schemes do not support multiple users seamlessly deploying diverse applications on the same shared sensor network. As WSNs become more common, it is important to provide such support, since it enables higher-level optimizations such as code reuse, energy savings, and traffic reduction. In this paper, we propose a macro-programming framework called Nano-CF which, in addition to supporting in-network programming, allows multiple applications written by different programmers to be executed simultaneously on a sensor networking infrastructure. This framework enables the use of a common sensing infrastructure for a number of applications without users having to worry about the applications already deployed on the network. The framework also supports timing constraints and resource reservations using the Nano-RK operating system. Nano-CF is efficient at improving WSN performance by (a) combining multiple user programs, (b) aggregating packets for data delivery, and (c) satisfying timing and energy specifications using Rate-Harmonized Scheduling. Using representative applications, we demonstrate that Nano-CF achieves a 90% reduction in Source Lines-of-Code (SLoC) and 50% energy savings from aggregated data delivery.
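
One of the mechanisms listed above, aggregated data delivery, can be illustrated with a small C sketch that packs several applications' records into a single radio payload before transmission. Names, record format and sizes are assumptions for illustration only, not Nano-CF's actual API.

```c
/* Generic illustration of aggregating several applications' readings into one
 * radio packet before transmission, so the radio is powered up less often.
 * Names and sizes are assumptions -- this is not Nano-CF's actual API. */
#include <stdint.h>
#include <string.h>

#define RADIO_PAYLOAD 64          /* assumed maximum radio payload in bytes */

struct agg_buf {
    uint8_t data[RADIO_PAYLOAD];
    uint8_t used;
};

/* Append one application's record (app id + length + payload); returns 0 on
 * success, -1 if the packet is full and should be flushed first. */
int agg_append(struct agg_buf *b, uint8_t app_id,
               const uint8_t *payload, uint8_t len)
{
    if (b->used + 2u + len > RADIO_PAYLOAD)
        return -1;
    b->data[b->used++] = app_id;
    b->data[b->used++] = len;
    memcpy(&b->data[b->used], payload, len);
    b->used += len;
    return 0;
}

/* When the buffer fills up (or a delivery deadline approaches), the combined
 * packet is handed to the radio driver once, instead of one send per app. */
```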

Relevance:

10.00%

Publisher:

Abstract:

Radio interference drastically affects the performance of sensor-net communications, leading to packet loss and reduced energy efficiency. As an increasing number of wireless devices operate on the same ISM frequencies, there is a strong need for understanding and debugging the performance of existing sensornet protocols under interference. Doing so requires a low-cost, flexible testbed infrastructure that allows the repeatable generation of a wide range of interference patterns. Unfortunately, to date, existing sensornet testbeds lack such capabilities and do not make it easy to study the coexistence problems between devices sharing the same frequencies. This paper addresses the current lack of such an infrastructure by using off-the-shelf sensor motes to record and play back interference patterns as well as to generate customizable and repeatable interference in real time. We propose and develop JamLab: a low-cost infrastructure to augment existing sensornet testbeds with accurate interference generation while limiting the overhead to a simple upload of the appropriate software. We explain how we tackle the hardware limitations and obtain accurate measurement and regeneration of interference, and we experimentally evaluate the accuracy of JamLab with respect to time, space, and intensity. We further use JamLab to characterize the impact of interference on sensornet MAC protocols.
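
The record-and-playback principle described above can be sketched as follows; the radio HAL calls (radio_read_rssi, radio_tx_carrier, sleep_us), the sampling rate and the binary on/off regeneration are hypothetical simplifications, not JamLab's implementation, which deals with the hardware limitations mentioned in the abstract.

```c
/* Minimal record-and-playback sketch of interference regeneration.
 * radio_read_rssi(), radio_tx_carrier() and sleep_us() are hypothetical HAL
 * calls standing in for a real mote's radio driver; sampling rate and buffer
 * size are assumptions. */
#include <stdint.h>

#define N_SAMPLES 1000
#define SAMPLE_PERIOD_US 100        /* assumed 10 kHz RSSI sampling */

extern int8_t radio_read_rssi(void);              /* hypothetical */
extern void   radio_tx_carrier(int on);           /* hypothetical */
extern void   sleep_us(uint32_t us);              /* hypothetical */

static int8_t trace[N_SAMPLES];

void record_interference(void)
{
    for (int i = 0; i < N_SAMPLES; i++) {
        trace[i] = radio_read_rssi();             /* sample channel energy */
        sleep_us(SAMPLE_PERIOD_US);
    }
}

void playback_interference(int8_t threshold_dbm)
{
    for (int i = 0; i < N_SAMPLES; i++) {
        /* Re-emit energy whenever the recorded trace was above the threshold,
         * approximating the original on/off interference pattern. */
        radio_tx_carrier(trace[i] > threshold_dbm);
        sleep_us(SAMPLE_PERIOD_US);
    }
    radio_tx_carrier(0);
}
```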

Relevance:

10.00%

Publisher:

Abstract:

Preemptions account for a non-negligible overhead during system execution. There has been a substantial amount of research on estimating the delay incurred due to the loss of working sets in the processor state (caches, registers, TLBs), and some on avoiding preemptions or limiting the preemption cost. We present an algorithm to reduce preemptions by further delaying the start of execution of high-priority tasks in fixed-priority scheduling. Our approach takes advantage of the floating non-preemptive regions model and exploits the fact that, during the schedule, the relative task phasing will differ from the worst-case scenario in terms of admissible preemption deferral. Furthermore, approximations to reduce the complexity of the proposed approach are presented. A substantial set of experiments demonstrates that the approach and its approximations improve over existing work, in particular for high-utilisation systems, where savings of up to 22% in the number of preemptions are attained.
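
The notion of admissible preemption deferral used above can be related to the standard blocking tolerance of fixed-priority analysis, sketched here with generic symbols for a constrained-deadline task set; the paper's exact formulation under the floating non-preemptive regions model may differ.

```latex
% Blocking tolerance for a constrained-deadline fixed-priority task
% (generic symbols; an illustration of the notion, not the paper's analysis).
\[
  \beta_i \;=\; \max\Bigl\{\, B \ge 0 \;:\; \exists\, t \in (0, D_i]:\;
     B + C_i + \sum_{j \in hp(i)} \Bigl\lceil \tfrac{t}{T_j} \Bigr\rceil C_j \;\le\; t \Bigr\},
\]
% i.e. the largest blocking task $\tau_i$ can suffer and still meet its
% deadline $D_i$; a lower-priority job may keep running non-preemptively only
% for as long as it does not exhaust the tolerance $\beta_j$ of any
% higher-priority task released in the meantime, which is roughly what
% admissible preemption deferral refers to above.
```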

Relevance:

10.00%

Publisher:

Abstract:

WiDom is a wireless prioritized medium access control protocol which offers a very large number of priority levels. Hence, it brings the potential to employ non-preemptive static-priority scheduling and schedulability analysis for a wireless channel, assuming that the overhead of WiDom is modeled properly. One schedulability analysis for WiDom has already been proposed, but recent research has created a new version of WiDom (we call it slotted WiDom) with lower overhead, for which no schedulability analysis exists. In this paper we propose a new schedulability analysis for slotted WiDom and extend it to work also for message streams with release jitter. We have performed experiments with an implementation of slotted WiDom on a real-world platform (MicaZ). We find that, for each message stream, the maximum observed response time never exceeds the calculated response time, which corroborates our belief that our new scheduling theory is applicable in practice.

Relevance:

10.00%

Publisher:

Abstract:

With progressing CMOS technology miniaturization, leakage power consumption is starting to dominate dynamic power consumption. Recent technology trends have equipped modern embedded processors with several sleep states and have reduced the overhead (energy/time) of sleep transitions. The potential of dynamic voltage and frequency scaling (DVFS) to save energy is diminishing due to these efficient (low-overhead) sleep states and the increased static (leakage) power consumption. State-of-the-art research on static power reduction at the system level is based on assumptions that cannot easily be integrated into practical systems. We propose a novel enhanced race-to-halt approach (ERTH) to reduce overall system energy consumption. Exhaustive simulations demonstrate the effectiveness of our approach, showing an improvement of up to 8% over existing work.
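
The race-to-halt principle can be sketched as follows: finish the pending work at full speed, then pick the deepest sleep state whose break-even time fits the predicted idle interval. The state table and the enter_sleep_state call below are hypothetical placeholders, not the ERTH algorithm evaluated in the paper.

```c
/* Minimal sketch of the race-to-halt idea: run pending work at full speed,
 * then put the processor into the deepest sleep state whose transition cost
 * pays off within the predicted idle interval.  State table values and the
 * enter_sleep_state() call are hypothetical, not the ERTH algorithm itself. */
#include <stdint.h>

struct sleep_state {
    const char *name;
    uint32_t break_even_us;   /* minimum idle time for a net energy gain */
};

/* Assumed example states, ordered from deepest (largest savings) to lightest. */
static const struct sleep_state states[] = {
    { "deep",    2000 },
    { "standby",  300 },
    { "idle",      20 },
};

extern void enter_sleep_state(const char *name, uint32_t max_us);  /* hypothetical */

/* Pick the deepest state whose break-even time fits the predicted idle gap. */
void race_to_halt(uint32_t predicted_idle_us)
{
    for (unsigned i = 0; i < sizeof states / sizeof states[0]; i++) {
        if (predicted_idle_us >= states[i].break_even_us) {
            enter_sleep_state(states[i].name, predicted_idle_us);
            return;
        }
    }
    /* Idle gap too short to pay for any transition: stay active. */
}
```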