990 results for time priority
Abstract:
We analyze the impact of a minimum price variation (tick) and time priority on the dynamics of quotes and the trading costs when competition for the order flow is dynamic. We find that convergence to competitive outcomes can take time and that the speed of convergence is influenced by the tick size, the priority rule and the characteristics of the order arrival process. We also show that a zero minimum price variation is never optimal when competition for the order flow is dynamic. We compare the trading outcomes with and without time priority. Time priority is shown to guarantee that uncompetitive spreads cannot be sustained over time. However, it can sometimes result in higher trading costs. Empirical implications are proposed. In particular, we relate the size of the trading costs to the frequency of new offers and the dynamics of the inside spread to the state of the book.
Abstract:
This paper analyzes a communication network facing users with a continuous distribution of delay cost per unit time. Priority queueing is often used as a way to provide differentiated services for users with different delay sensitivities. Delay is a key dimension of network service quality, so priority is a valuable but limited resource that should be optimally allocated. We investigate the allocation of priority in queues via a simple bidding mechanism. In our mechanism, an arriving user can decide not to enter the network at all or to submit a bid announcing a delay sensitivity value. A user entering the network obtains priority over all users who make lower bids, and is charged according to a payment function designed following an exclusion compensation principle. The payment function is proved to be incentive compatible, so the equilibrium bidding behavior leads to the implementation of the "cµ-rule". Maximizing social welfare or revenue by appropriately setting the reserve payment is also analyzed.
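For context, the cµ-rule referenced above is the standard priority-queueing policy that serves the waiting user with the largest product of delay cost c and service rate µ. Below is a minimal sketch of that ordering only; the bidding and payment details of the paper are not modelled, and the class and parameter names are illustrative.

```python
import heapq

class CmuQueue:
    """Non-preemptive priority queue implementing the c*mu ordering:
    the waiting job with the largest (delay cost * service rate) is served first."""

    def __init__(self):
        self._heap = []   # min-heap keyed on -c*mu
        self._seq = 0     # insertion counter: FIFO tie-break within equal c*mu

    def submit(self, job_id, delay_cost, service_rate):
        # A user's bid is interpreted as its announced delay cost per unit time.
        priority = delay_cost * service_rate
        heapq.heappush(self._heap, (-priority, self._seq, job_id))
        self._seq += 1

    def next_to_serve(self):
        # Pop the job with the highest c*mu product.
        if not self._heap:
            return None
        _, _, job_id = heapq.heappop(self._heap)
        return job_id

# Example: three users with the same service rate but different announced delay costs.
q = CmuQueue()
q.submit("a", delay_cost=2.0, service_rate=1.0)
q.submit("b", delay_cost=5.0, service_rate=1.0)
q.submit("c", delay_cost=1.0, service_rate=1.0)
assert q.next_to_serve() == "b"   # the highest bidder is served first
```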
Abstract:
P-NET is a multi-master fieldbus standard based on a virtual token passing scheme. In P-NET each master is allowed to transmit only one message per token visit. In the worst case, the communication response time can be derived by considering that, in each token cycle, all stations use the token to transmit a message. In this paper, we define a more sophisticated P-NET model, which considers the actual token utilisation. We then analyse the possibility of implementing a local priority-based scheduling policy to improve the real-time behaviour of P-NET.
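Under the simple worst-case model mentioned above (every master uses each token visit to transmit one message), the bound on a token cycle is just the sum, over all masters, of one message transmission plus the token pass overhead. The sketch below illustrates that simplified model only; the parameter names are ours and the paper's refined model based on actual token utilisation is not reproduced.

```python
def worst_case_token_cycle(n_masters, c_msg_max, c_token_pass):
    """Worst-case duration of one virtual-token cycle when every master
    transmits exactly one (longest) message per token visit."""
    return n_masters * (c_msg_max + c_token_pass)

def worst_case_queue_delay(queued_msgs, n_masters, c_msg_max, c_token_pass):
    """A message waiting behind `queued_msgs` messages in its own master's
    FCFS queue needs that many full token cycles before it is transmitted."""
    return queued_msgs * worst_case_token_cycle(n_masters, c_msg_max, c_token_pass)

# Example: 5 masters, 2.0 ms longest message, 0.5 ms token pass, 3 messages ahead.
print(worst_case_queue_delay(3, 5, 2.0, 0.5))  # 37.5 ms
```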
Abstract:
Real-time scheduling usually considers worst-case values for the parameters of task (or message stream) sets, in order to provide safe schedulability tests for hard real-time systems. However, worst-case conditions introduce a level of pessimism that is often inadequate for a certain class of (soft) real-time systems. In this paper we provide an approach for computing the stochastic response time of tasks whose inter-arrival times are described by discrete probability distribution functions, instead of by minimum inter-arrival time (MIT) values.
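When task parameters are given as discrete distributions rather than single worst-case values, the basic building block is convolving distributions instead of adding scalars. The following is a minimal, hedged sketch of that building block; it is not the paper's full analysis, and the example numbers are illustrative.

```python
from collections import defaultdict

def convolve(dist_a, dist_b):
    """Distribution of the sum of two independent discrete random variables,
    each given as a {value: probability} mapping."""
    out = defaultdict(float)
    for va, pa in dist_a.items():
        for vb, pb in dist_b.items():
            out[va + vb] += pa * pb
    return dict(out)

def exceedance(dist, deadline):
    """Probability that the value exceeds the deadline (deadline-miss probability)."""
    return sum(p for v, p in dist.items() if v > deadline)

# Example: total demand of a task executed twice, each run taking
# 2 time units with probability 0.7 or 3 with probability 0.3.
c = {2: 0.7, 3: 0.3}
total = convolve(c, c)
print(total)                 # {4: 0.49, 5: 0.42, 6: 0.09}
print(exceedance(total, 5))  # 0.09
```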
Abstract:
Demo presented at the 12th Workshop on Models and Algorithms for Planning and Scheduling Problems (MAPSP 2015), 8-12 June 2015, La Roche-en-Ardenne, Belgium. Extended abstract.
Abstract:
In this paper, we propose the Distributed using Optimal Priority Assignment (DOPA) heuristic, which finds a feasible partitioning and priority assignment for distributed applications based on the linear transactional model. DOPA partitions the tasks and messages in the distributed system and uses the Optimal Priority Assignment (OPA) algorithm, also known as Audsley's algorithm, to find the priorities for that partition. The experimental results show how the use of the OPA algorithm increases, on average, the number of schedulable tasks and messages in a distributed system when compared to the use of Deadline Monotonic (DM), usually favoured in other works. We then extend these results to the assignment of parallel/distributed applications and present a second heuristic named Parallel-DOPA (P-DOPA). In that case, we show how the partitioning process can be simplified by using the Distributed Stretch Transformation (DST), a parallel transaction transformation algorithm introduced in [1].
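Audsley's OPA algorithm mentioned above assigns priorities from lowest to highest: at each level it looks for any still-unassigned task that would be schedulable if given the lowest remaining priority, assigns it, and repeats; if no such task exists, no feasible fixed-priority assignment exists. A minimal sketch follows, with the schedulability test abstracted as a callback since the actual test is system-specific; the function names are ours, not DOPA's.

```python
def audsley_opa(tasks, schedulable_at_lowest):
    """Audsley's Optimal Priority Assignment (OPA).

    tasks: list of task identifiers.
    schedulable_at_lowest(task, others): True if `task` meets its deadline
        when given the lowest priority while every task in `others` has a
        higher priority. OPA is optimal only when this test does not depend
        on the relative order of the higher-priority tasks.

    Returns the tasks ordered from lowest to highest priority, or None if
    no feasible fixed-priority assignment exists.
    """
    unassigned = list(tasks)
    low_to_high = []
    while unassigned:
        for i, task in enumerate(unassigned):
            others = unassigned[:i] + unassigned[i + 1:]
            if schedulable_at_lowest(task, others):
                low_to_high.append(task)   # task takes the lowest free level
                del unassigned[i]
                break
        else:
            return None   # no task can take this priority level: infeasible
    return low_to_high
```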
Abstract:
In presenting their priorities for the new European Commission, Miroslav Beblavý and Ilaria Maselli assert in this CEPS Commentary that the time has come to devise an EU-level shock absorption mechanism. In their view, the instrument that best aligns varying political and economic objectives is a form of reinsurance of national systems of unemployment insurance. The primary motivation for the reinsurance proposal is that it can have a substantial stabilising effect, especially in case of large shocks, and, at the same time, be politically realistic in terms of contributions, costs and administrative burdens.
Abstract:
Modern real-time systems increasingly generate heavy and dynamic computational workloads, and it is becoming unrealistic to expect them to be implemented on uniprocessor systems. In fact, the shift from single-processor to multiprocessor systems can be seen, both in the general-purpose and in the embedded domain, as an energy-efficient way of improving application performance. At the same time, the proliferation of multiprocessor platforms has turned parallel programming into a topic of great interest, and dynamic parallelism has quickly gained popularity as a programming model. The idea behind this model is to encourage programmers to expose all opportunities for parallelism by simply indicating potential parallel regions within their applications. All such annotations are treated by the system purely as hints, and may be ignored and replaced by equivalent sequential constructs by the language itself. Thus, how the computation is actually subdivided and mapped onto the various processors is the responsibility of the compiler and of the underlying computing system. By removing this burden from the programmer, programming complexity is considerably reduced, which usually translates into increased productivity. However, unless the underlying scheduling mechanism is simple and fast, so that the overall overhead is kept low, the benefits of generating such fine-grained parallelism will be merely hypothetical. From this scheduling perspective, algorithms that employ a work-stealing policy are increasingly popular, with proven efficiency in terms of time, space and communication requirements. However, these algorithms do not handle timing constraints, nor any other form of task prioritisation, which prevents them from being applied directly to real-time systems. Moreover, they are traditionally implemented in the language runtime, creating a two-level scheduling system in which the predictability essential to a real-time system cannot be ensured. This thesis describes how the work-stealing approach can be redesigned to meet real-time requirements while retaining the fundamental principles that have yielded such good results. Very briefly, the single conventional work queue (deque) is replaced by a queue of deques, sorted in increasing order of task priority. We then apply the well-known G-EDF dynamic scheduling algorithm on top, blend the rules of both, and thus our proposal is born: the RTWS scheduling algorithm. Taking advantage of the modularity offered by the Linux scheduler, RTWS is added as a new scheduling class, in order to assess in practice whether the proposed algorithm is viable, that is, whether it guarantees the desired efficiency and schedulability. Modifying the Linux kernel is a challenging task, due to the complexity of its internal functions and the strong interdependencies between its subsystems. Nevertheless, one of the goals of this thesis was to make sure that RTWS is more than an interesting concept.
Consequently, a significant part of this document is devoted to discussing the implementation of RTWS and to exposing problematic situations, many of them not considered in theory, such as the mismatch between several synchronisation mechanisms. The experimental results show that RTWS, compared with other practical work on dynamic scheduling of tasks with timing constraints, significantly reduces the scheduling overhead through efficient and scalable control of migrations and context switches (at least up to 8 CPUs), while achieving good dynamic load balancing of the system, and doing so inexpensively. However, during the evaluation a flaw was detected in the RTWS implementation: it gives up on stealing work too easily, which leads to idle periods on the CPU in question when overall system utilisation is low. Although the work focused on keeping the scheduling cost low and on achieving good data locality, system schedulability was never neglected. In fact, the proposed scheduling algorithm proved to be quite robust, missing no deadlines in the experiments performed. We can therefore state that some priority inversion, caused by the BAS stealing sub-policy, does not compromise the schedulability goals and even helps to reduce contention on the data structures. Even so, RTWS also supports a deterministic stealing sub-policy: PAS. The experimental evaluation, however, did not give a clear picture of the impact of one versus the other. Nevertheless, in general terms, we can conclude that RTWS is a promising solution for the efficient scheduling of parallel tasks with timing constraints.
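A rough sketch of the core data structure described above: the single work-stealing deque is replaced by a priority-ordered collection of deques, the owner works on the most urgent task it holds, and a thief steals from the opposite end of the earliest-deadline non-empty deque, approximating G-EDF ordering across CPUs. The names and simplifications below are ours, not the thesis implementation.

```python
import bisect
import collections

class PriorityDequeSet:
    """Queue of deques ordered by task priority (here: absolute deadline)."""

    def __init__(self):
        self._deadlines = []   # sorted list of deadlines (earliest first)
        self._deques = {}      # deadline -> deque of tasks

    def push(self, deadline, task):
        if deadline not in self._deques:
            bisect.insort(self._deadlines, deadline)
            self._deques[deadline] = collections.deque()
        self._deques[deadline].appendleft(task)   # owner works at the front

    def pop_local(self):
        # Owner takes the most urgent task (front of the earliest-deadline deque).
        return self._take(front=True)

    def steal(self):
        # Thief takes from the opposite end to reduce contention with the owner.
        return self._take(front=False)

    def _take(self, front):
        for d in list(self._deadlines):
            dq = self._deques[d]
            if dq:
                return dq.popleft() if front else dq.pop()
            # drop exhausted deques lazily
            self._deadlines.remove(d)
            del self._deques[d]
        return None
```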
Abstract:
Profibus networks are widely used as the communication infrastructure for supporting distributed computer-controlled applications. Most of the time, these applications impose strict real-time requirements. Profibus-DP has gradually become the preferred Profibus application profile. It is usually implemented as a mono-master Profibus network, and is optimised for speed and efficiency. The aim of this paper is to analyse the real-time behaviour of this class of Profibus networks. Importantly, we develop a new methodology for evaluating the worst-case message response time in systems where high-priority and cyclic low-priority Profibus traffic coexist. The proposed analysis constitutes a powerful tool to guarantee, prior to runtime, the real-time behaviour of a distributed computer-controlled system based on a Profibus network, where the real-time traffic is supported either by high-priority or by cyclic poll Profibus messages.
Abstract:
Controller Area Network (CAN) is a fieldbus network suitable for small-scale Distributed Computer Controlled Systems, being appropriate for transferring short real-time messages. Nevertheless, it must be understood that the continuity of service is not fully guaranteed, since it may be disturbed by temporary periods of network inaccessibility [1]. In this paper, such temporary periods of network inaccessibility are integrated in the response time analysis of CAN networks. The achieved results emphasise that, in the presence of temporary periods of network inaccessibility, a CAN network is not able to provide different integrity levels to the supported applications, since errors in low priority messages interfere with the response time of higher priority message streams.
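For context, the classical worst-case response-time analysis for a non-preemptive CAN message computes R = J + w + C, where the queuing delay w is the fixed point of blocking plus higher-priority interference. The sketch below shows that standard recurrence only; the paper's integration of network inaccessibility is not reproduced, and the `extra_blocking` parameter is merely an illustrative hook where such an additional term could be folded in.

```python
import math

def can_response_time(msg, higher_prio, tau_bit, extra_blocking=0.0,
                      max_iters=1000):
    """Classical worst-case response time of a CAN message stream.

    msg and each entry of higher_prio: dict with 'C' (worst-case frame
    transmission time), 'T' (period) and 'J' (queuing jitter); msg also
    carries 'B' (blocking by one lower-priority frame already on the bus).
    """
    w = msg['B'] + extra_blocking
    for _ in range(max_iters):
        interference = sum(math.ceil((w + k['J'] + tau_bit) / k['T']) * k['C']
                           for k in higher_prio)
        w_new = msg['B'] + extra_blocking + interference
        if w_new == w:                   # fixed point reached
            return msg['J'] + w + msg['C']
        w = w_new
    return None                          # no convergence: deadline cannot be guaranteed
```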
Abstract:
In this paper we address the real-time capabilities of P-NET, which is a multi-master fieldbus standard based on a virtual token passing scheme. We show how P-NET's medium access control (MAC) protocol is able to guarantee a bounded access time to message requests. We then propose a model for implementing fixed priority-based dispatching mechanisms at each master's application level. In this way, we diminish the impact of the first-come-first-served (FCFS) policy that P-NET uses at the data link layer. The proposed model raises several issues well known within the real-time systems community: message release jitter; pre-run-time schedulability analysis in non-pre-emptive contexts; non-independence of tasks at the application level. We identify these issues in the proposed model and show how results available for priority-based task dispatching can be adapted to encompass priority-based message dispatching in P-NET networks.
Abstract:
This paper provides a comprehensive study on how to use Profibus fieldbus networks to support real-time industrial communications, that is, on how to ensure the transmission of real-time messages within a bounded maximum time. Profibus is based on a simplified Timed Token (TT) protocol, which is a well-proven solution for real-time communication systems. However, Profibus differs from the TT protocol, thus preventing the application of the usual TT protocol real-time analysis. In fact, real-time solutions for networks based on the TT protocol rely on the possibility of allocating specific bandwidth for the real-time traffic. This means that a minimum amount of time is always available, at each token visit, to transmit real-time messages. Conversely, with the Profibus protocol, in the worst case, only one real-time message is processed per token visit. The authors propose two approaches to guarantee the real-time behaviour of the Profibus protocol: (1) an unconstrained low-priority traffic profile; and (2) a constrained low-priority traffic profile. The proposed analysis shows that the first profile is a suitable approach for more responsive systems (tighter deadlines), while the second allows for increased non-real-time traffic throughput.
Abstract:
In this paper we address the ability of the P-NET Medium Access Control (MAC) to schedule traffic according to its real-time requirements, in order to support real-time distributed applications. We provide a schedulability analysis based on the P-NET standard, and propose mechanisms to overcome priority inversion problems resulting from the use of FIFO outgoing buffers.
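The priority inversion named above stems from the outgoing buffer itself: with FIFO queuing, an urgent message can sit behind arbitrarily many low-priority messages queued earlier at the same station. The following is a minimal sketch of the obvious application-level remedy, a priority-ordered outgoing buffer; it is illustrative only, and the paper's actual mechanisms may differ.

```python
import heapq
import itertools

class PriorityOutgoingBuffer:
    """Outgoing message buffer that releases the highest-priority pending
    message at each token visit (lower number = higher priority), instead
    of the oldest one as a FIFO buffer would."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()   # FIFO tie-break within a priority

    def queue(self, priority, message):
        heapq.heappush(self._heap, (priority, next(self._counter), message))

    def on_token_visit(self):
        """Called when the virtual token reaches this master: transmit one message."""
        if self._heap:
            _, _, message = heapq.heappop(self._heap)
            return message
        return None
```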
Abstract:
The paper provides a comprehensive study on how to use Profibus networks to support real-time communications, that is, on how to ensure the transmission of real-time messages before their deadlines. Profibus is based on a simplified Timed Token (TT) protocol, which is a well-proven solution for real-time communication systems. However, the differences between Profibus and the TT protocol prevent the application of the usual TT analysis. The main reason is that, unlike the TT protocol, in the worst case only one high-priority message is processed per token visit. The major contribution of the paper is to prove that, despite this shortcoming, it is possible to guarantee real-time communication behaviour with the Profibus protocol.
Abstract:
WiDom is a wireless prioritized medium access control protocol which offers a very large number of priority levels. Hence, it brings the potential to employ non-preemptive static-priority scheduling and schedulability analysis for a wireless channel, assuming that the overhead of WiDom is modeled properly. Recent research has created a new version of WiDom (which we call slotted WiDom) that offers lower overhead than the previous version. In this paper we propose a new schedulability analysis for slotted WiDom and extend it to work for message streams with release jitter. Furthermore, to provide an accurate timing analysis, we must include the effect of transmission faults on message latencies. Thus, in the proposed analysis we consider the existence of different noise sources and develop the analysis for the case where messages are transmitted over noisy wireless channels. The proposed analysis is evaluated by testing slotted WiDom in two different modes on a real test-bed. The results from the experiments provide a firm validation of our findings.