186 results for "deadline"


Relevance: 10.00%

Abstract:

Quality of Service (QoS) is a new issue in cloud-based MapReduce, a popular computation model for parallel and distributed processing of big data. Guaranteeing QoS is challenging in a dynamic computation environment because a fixed resource allocation may become under-provisioned, leading to QoS violations, or over-provisioned, incurring unnecessary resource cost. This calls for runtime resource scaling that adapts to environmental changes. Aiming to guarantee QoS, which in this work means meeting a hard deadline, this paper develops a theory that determines how and when resources should be scaled up or down for cloud-based MapReduce. The theory employs a nonlinear transformation to restate the problem in a reverse resource space, which simplifies the theoretical analysis significantly. Theoretical results are then presented in three theorems giving sufficient conditions for guaranteeing the QoS of cloud-based MapReduce. The superiority and applications of the theory are demonstrated through case studies.
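A minimal sketch of the kind of runtime scaling decision this abstract describes — not the paper's reverse-resource-space theory; the function names and the linear work model are assumptions:

```python
import math

def min_machines(remaining_work, time_left, rate_per_machine):
    """Minimum machines needed to finish remaining_work (task-seconds)
    before the hard deadline, assuming each machine processes
    rate_per_machine task-seconds per second (a simplifying assumption)."""
    if time_left <= 0:
        raise ValueError("deadline already passed")
    return math.ceil(remaining_work / (time_left * rate_per_machine))

def scaling_decision(current, remaining_work, time_left, rate_per_machine):
    """Compare the current allocation with the minimum needed and decide
    whether to scale up (avoid deadline miss) or down (avoid waste)."""
    needed = min_machines(remaining_work, time_left, rate_per_machine)
    if needed > current:
        return ("scale up", needed)
    if needed < current:
        return ("scale down", needed)
    return ("hold", current)
```

With 1000 task-seconds of work left, 100 s to the deadline and 2 task-seconds/s per machine, 5 machines are needed, so a 4-machine cluster scales up and an 8-machine cluster scales down.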

Relevance: 10.00%

Abstract:

Real-time scheduling algorithms, such as Rate Monotonic and Earliest Deadline First, guarantee that computations complete within a pre-defined time. As many real-time systems operate on limited battery power, these algorithms have been extended with power-aware properties. This thesis explores 13 power-aware real-time scheduling algorithms for processor-, device- and system-level use.
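The core of Earliest Deadline First is simply "always run the ready job with the nearest deadline". A minimal non-preemptive sketch (our own illustration, not taken from the thesis):

```python
import heapq

def edf_order(jobs):
    """Return the order in which ready jobs run under non-preemptive
    Earliest Deadline First. jobs: list of (name, absolute_deadline).
    A min-heap keyed on the deadline always yields the most urgent job."""
    heap = [(deadline, name) for name, deadline in jobs]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order
```

For jobs a, b, c with deadlines 30, 10, 20, EDF runs b, then c, then a.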

Relevance: 10.00%

Abstract:

Utilization bounds for Earliest Deadline First (EDF) and Rate Monotonic (RM) scheduling are known and well understood for uniprocessor systems. In this paper, we derive limits on similar bounds for the multiprocessor case, where the individual processors need not be identical. Tasks are partitioned among the processors, and RM scheduling is assumed on each individual processor. A minimum limit on the bounds for a 'greedy' class of allocation algorithms is given and proved, since the actual value of the bound depends on the algorithm that allocates the tasks. We also derive the utilization bound of an algorithm that allocates tasks in decreasing order of utilization factor. Knowledge of such bounds allows very fast schedulability tests, although the tests are sufficient but not necessary for schedulability.
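The classical uniprocessor bounds this paper builds on can be checked in a few lines. A sketch of the standard Liu-Layland RM test and the exact EDF test (these are the textbook uniprocessor results, not the paper's multiprocessor bounds):

```python
def rm_schedulable(utilizations):
    """Sufficient (not necessary) Liu-Layland test for Rate Monotonic
    on one processor: total utilization U <= n * (2**(1/n) - 1)."""
    n = len(utilizations)
    return sum(utilizations) <= n * (2 ** (1 / n) - 1)

def edf_schedulable(utilizations):
    """Exact test for preemptive EDF with implicit deadlines on one
    processor: total utilization U <= 1."""
    return sum(utilizations) <= 1.0
```

For three tasks of utilization 0.2 each, U = 0.6 is below the RM bound of about 0.780; a task set with U = 1.05 fails the RM test but a set with U = 1.0 still passes the EDF test.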

Relevance: 10.00%

Abstract:

A customer-reported problem (or trouble ticket) in software maintenance is typically solved by one or more maintenance engineers. The decision to allocate the ticket to one or more engineers is generally taken by the lead, based on customer delivery deadlines and a guided complexity assessment from each maintenance engineer. The key challenge in this scenario is twofold: untruthful (inflated) elicitation of ticket complexity by each engineer to the lead, and the allocation of the ticket to a group of engineers who will solve it within the customer deadline. The allocation should ensure individual and coalitional rationality along with coalitional stability. In this paper we use game theory to examine truthful elicitation of ticket complexities by engineers solving a ticket as a group under a specific customer delivery deadline. We formulate this problem as a strategic-form game and propose two mechanisms: (1) Division of Labor (DOL) and (2) Extended Second Price (ESP). We show that, under the proposed mechanisms, truth-telling by each engineer constitutes a dominant-strategy Nash equilibrium of the underlying game. We also analyze the individual rationality (IR) and coalitional rationality (CR) properties that motivate voluntary and group participation, and we use the Core, a solution concept from cooperative game theory, to analyze the stability of the proposed group given the allocation and payments.
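The "second price" idea the ESP mechanism extends comes from the classic Vickrey auction, where the winner pays the second-highest bid, which makes truthful bidding a dominant strategy. A sketch of that classic building block (not the paper's DOL or ESP mechanisms):

```python
def second_price_auction(bids):
    """Classic Vickrey auction: the highest bidder wins but pays only
    the second-highest bid, so no bidder can gain by misreporting.
    bids: dict mapping bidder name -> bid. Assumes >= 2 bidders."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]   # payment decoupled from the winner's own bid
    return winner, price
```

With bids a=10, b=7, c=5, bidder a wins and pays 7; raising or lowering a's bid cannot change what a pays while a still wins, which is the intuition behind truthfulness.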

Relevance: 10.00%

Abstract:

This paper considers a firm real-time M/M/1 system in which jobs have stochastic deadlines until the end of service. A method is presented for approximating the loss ratio of the earliest-deadline-first scheduling policy with exit control through the early-discarding technique. The approximation uses the arrival rate and the mean relative deadline, normalized with respect to the mean service time, for exponential and uniform distributions of relative deadlines. Simulations show that the maximum approximation error is less than 4% and 2% for the two distributions, respectively, over a wide range of arrival rates and mean relative deadlines.
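A Monte Carlo sketch of the underlying loss-ratio quantity for a simplified FCFS variant of this model — not the paper's EDF-with-early-discard policy or its analytical approximation; all parameter names are ours:

```python
import random

def loss_ratio_fcfs(lam, mu, mean_rel_deadline, n_jobs, seed=1):
    """Estimate the fraction of jobs missing their deadline in an M/M/1
    FCFS queue, where each job's exponentially distributed relative
    deadline runs until the end of its service. Uses the Lindley-style
    recursion: a job starts when it arrives or when its predecessor
    finishes, whichever is later."""
    rng = random.Random(seed)
    t = 0.0        # arrival time of the current job
    finish = 0.0   # finish time of the previous job
    missed = 0
    for _ in range(n_jobs):
        t += rng.expovariate(lam)                 # Poisson arrivals
        start = max(t, finish)                    # FCFS waiting
        finish = start + rng.expovariate(mu)      # exponential service
        if finish - t > rng.expovariate(1.0 / mean_rel_deadline):
            missed += 1                           # deadline elapsed first
    return missed / n_jobs
```

Note that `random.expovariate` takes a rate, so the deadline draw uses `1.0 / mean_rel_deadline`.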

Relevance: 10.00%

Abstract:

In this article, we present an exact theoretical analysis of a queueing system with an arbitrary distribution of the relative deadline until the end of service, operated under the first-come-first-served scheduling policy with exact admission control. We provide an explicit solution to the functional equation that the workload distribution must satisfy when the system reaches steady state. We use this solution to derive explicit expressions for the loss ratio and the sojourn-time distribution. Finally, we compare this loss ratio with that of a similar system operating without admission control, for some common distributions of the relative deadline.

Relevance: 10.00%

Abstract:

Information spreading in a population can be modeled as an epidemic. Campaigners (e.g., election campaign managers, or companies marketing products or movies) are interested in spreading a message by a given deadline using limited resources. In this paper, we formulate this situation as an optimal control problem whose solution (via Pontryagin's Maximum Principle) prescribes an optimal resource allocation over the duration of the campaign. We consider two scenarios: in the first, the campaigner can adjust a direct control (over time) that allows her to recruit individuals from the population (at some cost) to act as spreaders in the Susceptible-Infected-Susceptible (SIS) epidemic model. In the second, we allow the campaigner to adjust the effective spreading rate by incentivizing the infected in the Susceptible-Infected-Recovered (SIR) model, in addition to direct recruitment. Our formulation uses a time-varying information spreading rate to model individuals' changing interest in the campaign as the deadline approaches. In both cases, we show the existence of a solution, and its uniqueness for sufficiently small campaign deadlines. For a fixed spreading rate, we show the effectiveness of the optimal control strategy against a constant control strategy, a heuristic control strategy, and no control. We also show the sensitivity of the optimal control to the spreading-rate profile when it is time-varying.
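The uncontrolled dynamics underneath this formulation are the standard SIR ordinary differential equations. A forward-Euler sketch (our own illustration of the epidemic model only — the optimal control itself requires solving the Pontryagin boundary-value problem):

```python
def simulate_sir(beta, gamma, s0, i0, dt=0.01, steps=1000):
    """Forward-Euler integration of the SIR model in population
    fractions: ds/dt = -beta*s*i, di/dt = beta*s*i - gamma*i.
    Returns the final (susceptible, infected, recovered) fractions."""
    s, i = s0, i0
    for _ in range(steps):
        new_inf = beta * s * i * dt   # susceptibles becoming infected
        new_rec = gamma * i * dt      # infected recovering
        s -= new_inf
        i += new_inf - new_rec
    return s, i, 1.0 - s - i          # r follows from conservation
```

With beta=0.5 and gamma=0.1 (basic reproduction number 5), starting from 1% infected, the susceptible fraction visibly drops over the horizon, which is the "spread" a campaigner's control would steer.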

Relevance: 10.00%

Abstract:

The correctness of a hard real-time system depends on its ability to meet all its deadlines. Existing real-time systems use either a pure real-time scheduler or a real-time scheduler embedded as a scheduling class in the scheduler of an operating system (OS). Existing multicore schedulers that support both real-time and non-real-time tasks permit non-real-time tasks to execute on all cores at priorities lower than the real-time tasks, but the interrupts and softirqs associated with these non-real-time tasks can execute on any core at priorities higher than the real-time tasks. As a result, the execution overhead of real-time tasks in these systems is quite large, which in turn affects their runtime. So that hard real-time tasks can execute in such systems with minimal interference from other Linux tasks, we propose in this paper an integrated scheduler architecture, called SchedISA, which aims to considerably reduce the execution overhead of real-time tasks. To test the efficacy of the proposed scheduler, we implemented the partitioned earliest-deadline-first (P-EDF) scheduling algorithm in SchedISA on Linux kernel version 3.8 and conducted experiments on an Intel Core i7 processor with eight logical cores. We compared the execution overhead of real-time tasks in this implementation of SchedISA with that of SCHED_DEADLINE's P-EDF implementation, which executes real-time and non-real-time tasks concurrently on all cores in Linux. The experimental results show that the execution overhead of real-time tasks in SchedISA is considerably lower than in SCHED_DEADLINE. We believe that, with further refinement of SchedISA, the execution overhead of real-time tasks can be reduced to a predictable maximum, making it suitable for scheduling hard real-time tasks without affecting the CPU share of Linux tasks.
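P-EDF first partitions tasks onto cores and then runs EDF on each core independently. A sketch of a common first-fit partitioning heuristic using EDF's uniprocessor utilization bound (a generic illustration of P-EDF partitioning, not SchedISA's kernel implementation):

```python
def partition_first_fit(task_utils, n_cores):
    """First-fit partitioning for P-EDF: assign each task to the first
    core whose total utilization stays <= 1.0, EDF's uniprocessor bound.
    Returns a list of task-index lists per core, or None if no feasible
    assignment is found by this heuristic."""
    loads = [0.0] * n_cores
    cores = [[] for _ in range(n_cores)]
    for idx, u in enumerate(task_utils):
        for c in range(n_cores):
            if loads[c] + u <= 1.0:
                loads[c] += u
                cores[c].append(idx)
                break
        else:
            return None   # task fits on no core
    return cores
```

Tasks with utilizations 0.6, 0.6, 0.3 on two cores yield cores [[0, 2], [1]]; three tasks of 0.9 on two cores are rejected by the heuristic.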

Relevance: 10.00%

Abstract:

We consider optimal average power allocation policies in a wireless channel in the presence of individual delay constraints on the transmitted packets. Power is consumed only in the transmission of data, and we consider the case where the transmission power is a linear function of the data transmitted. The transmission channel may experience multipath fading. We develop a computationally efficient online algorithm for the case where all packets share the same hard delay constraint, and then generalize it to multiple real-time streams with different hard deadline constraints. Our algorithm uses linear programming and has very low complexity.
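When power is linear in the data sent, the offline single-packet version of this problem has a simple greedy solution: push bits into the cheapest (best-fading) slots before the deadline. A sketch under those assumptions — not the paper's online LP algorithm, and all names are ours:

```python
def min_energy_schedule(bits, deadline_slot, cost_per_bit, cap):
    """Greedy minimum-energy schedule for one packet when energy is
    linear in data. bits: packet size; slots 0..deadline_slot-1 are
    usable; cost_per_bit[t]: energy per bit in slot t (fading-dependent);
    cap: max bits per slot. Returns (total_energy, bits_per_slot), or
    None if the packet cannot meet its deadline."""
    slots = sorted(range(deadline_slot), key=lambda t: cost_per_bit[t])
    sent = [0] * len(cost_per_bit)
    energy = 0.0
    for t in slots:                # fill cheapest eligible slots first
        if bits == 0:
            break
        x = min(cap, bits)
        sent[t] = x
        energy += x * cost_per_bit[t]
        bits -= x
    if bits > 0:
        return None                # capacity before deadline is too small
    return energy, sent
```

With per-bit costs [3.0, 1.0, 2.0, 0.5], a deadline after slot 2, 5 bits and a 3-bit cap, the greedy puts 3 bits in slot 1 and 2 in slot 2 for total energy 7.0; slot 3's cheap cost is unusable because it lies past the deadline.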

Relevance: 10.00%

Abstract:

The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.

First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
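For the symmetric case, the success probability has a closed hypergeometric form: with the budget spread evenly over m of the n nodes and a unit-size, optimally coded file, recovery succeeds iff the accessed subset contains enough nonempty nodes. A sketch under those modeling assumptions (the function and parameter names are ours):

```python
from math import comb, ceil

def recovery_prob(n, m, budget, r):
    """Probability that a collector accessing a uniform random r-subset
    of n nodes recovers a unit-size, optimally coded file, when a total
    budget is spread evenly over m nonempty nodes (budget/m each).
    Recovery needs accessed storage >= 1, i.e. at least ceil(m/budget)
    nonempty nodes in the subset: a hypergeometric tail sum."""
    need = ceil(m / budget)
    total = comb(n, r)
    return sum(comb(m, k) * comb(n - m, r - k)
               for k in range(need, min(m, r) + 1)) / total
```

For n=4 nodes, budget 2 on m=2 nodes and access size r=2, any accessed subset touching at least one nonempty node succeeds, giving probability 5/6; spreading a large budget over all nodes drives the probability to 1, matching the "spread maximally when the budget is large" heuristic above.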

Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.

Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models containing a limited number of erasures per coding window, per sliding window, and containing erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.

Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.
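The LRU baseline that the backpressure policy is compared against is easy to make concrete. A minimal sketch (a generic LRU cache, not the simulated protocol itself):

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used replacement: on overflow, evict the entry
    that has gone longest without being read or written."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # insertion order tracks recency

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)          # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used
```

With capacity 2, inserting a and b, touching a, then inserting c evicts b, since b is the least recently used entry.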

Relevance: 10.00%

Abstract:

[EU] Until the labour reform came into force, the failure of collective bargaining meant that the previous collective agreement remained in force unless the parties agreed otherwise. Now, however, that extended validity is limited to one year: once this period has elapsed the agreement loses effect, absent an agreement to the contrary, and the higher-level agreement applies instead (if one exists). The reform is intended to encourage the renegotiation of agreements and to prevent the petrification of working conditions. Since the new rules raise numerous problems of interpretation, this work examines the possible answers and analyzes the criteria the courts have followed. Keywords: collective agreement, ultra-activity, period of validity, higher-level collective agreement.

Relevance: 10.00%

Abstract:

[ES] The main objective of this final degree project is to automate a warehouse with two linear axes, with the end goal of programming an operator panel so that a user or operator can indicate whether to deposit or retrieve a pallet, specifying its position through a coordinate system that divides the warehouse shelving into rows and columns. To this end, part of the control hardware is replaced, including the old positioning module (IP 246) that controlled the warehouse until now. That card is tied to software that is already obsolete, with no possibility of implementing an operator panel meeting the requirements of today's industry. An S7-300 CPU is also used, in place of the SIMATIC S5 that worked with the old card.
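The coordinate scheme described above amounts to mapping a (row, column) pair to target positions for the two linear axes. A sketch of that mapping — the pitch and origin values are illustrative assumptions, not taken from the project:

```python
def pallet_position(row, col, row_pitch_mm=1200.0, col_pitch_mm=900.0,
                    origin=(0.0, 0.0)):
    """Map a rack coordinate (row, column) to target positions for the
    horizontal and vertical linear axes, in millimetres. Pitches are the
    centre-to-centre spacing of columns and rows; both are hypothetical."""
    if row < 0 or col < 0:
        raise ValueError("coordinates must be non-negative")
    x0, z0 = origin
    return (x0 + col * col_pitch_mm, z0 + row * row_pitch_mm)
```

Row 2, column 3 with these pitches targets 2700 mm horizontally and 2400 mm vertically; in the actual system the panel would pass such targets to the PLC's positioning logic.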

Relevance: 10.00%

Abstract:

The main purpose of this dissertation is to provide empirical evidence on the factors that influence managers' decisions about the timing of disclosure of the annual financial statements of non-financial companies listed on the BM&FBOVESPA. The disclosure lag was measured as the interval in days between the end of the fiscal year and the date of the first filing of the Standardized Financial Statements (DFPs). The research focused on the influence on this lag of the following unobservable factors: monitoring, accounting complexity, corporate governance, the audit report, and performance. Based on the literature reviewed, proxies were formulated to capture the effects of these factors. Econometric models were estimated using: (i) Ordinary Least Squares (OLS) on cross-sectional data; (ii) pooled OLS; and (iii) panel data. The tests were applied to a balanced panel of 644 observations on 322 companies for fiscal years 2010 and 2011. The estimates revealed that companies tend to disclose their statements more quickly when they: (i) have a larger number of shareholders; (ii) have higher leverage; (iii) have adhered to one of the BM&FBOVESPA's differentiated corporate governance levels; (iv) have a higher proportion of independent directors on the board; or (v) were audited by one of the Big-4 audit firms. Conversely, companies tend to delay disclosure when they: (i) are subject to consolidation of financial statements; (ii) received a qualified opinion from their independent auditors; or (iii) reported negative results (losses).
Additionally, proxies were formulated to capture the effects of earnings surprises, one of them based on the benchmark for market expectations, namely analysts' forecasts; however, no impact of surprises on the disclosure lag was found. Nor was any influence on timing found from the proportion of institutional investors, the formation of controlling blocks, state regulation, the level of profitability, company size, or the trading of securities in foreign markets. The findings of this research can contribute not only to the literature in this line of research but also to investors, market analysts, and regulators. The nuances observed in the years analyzed, which marked the full adoption of the accounting standard aligned with IFRS and the recovery of the Brazilian economy from the impacts of the global financial crisis, allowed relevant findings. The relevance of this study is further enhanced by the novel application of proxies not previously used in the national setting to explain disclosure timing.

Relevance: 10.00%

Abstract:

[EN] The main objective of this project is to analyze Cuban public health policy and the Millennium Development Goals, especially those linked to health, presenting their potential and strengths within a well-defined time horizon (2000-2015). The Millennium Development Goals represent the international consensus on development and were signed as a minimum international agreement at the start of the century. The MDGs set out various goals and targets, with corresponding monitoring indicators, to be achieved by all countries by the present year. Health lies at the center of the Millennium Development Goals, whose targets reinforce one another in pursuit of genuine human development. The research draws on theoretical frameworks of the social production of health and disease, social justice, and the power structure. A retrospective analysis of the Cuban economic and social context is performed to study whether the health-related MDGs are likely to be met by the deadline on the island, and to compare the main health-related parameters with those of the neighboring countries in the Americas.

Relevance: 10.00%

Abstract:

This work presents the evolution of the petroleum refining industry in Brazil from its origins, detailing the changes in production profile, in the feedstock processed, and in the complexity of Brazilian refineries. It also outlines the next steps for domestic petroleum refining and its challenges in processing heavy, acidic crudes, as well as the impacts of having to produce derivatives with ever more restrictive specifications and lower environmental impact. Hydrorefining is identified as the first major step for the coming years, with the conclusion that units for the hydrotreatment of intermediate streams, or even of final products, will play a fundamental role in future refining schemes. Another important aspect analyzed is the need to increase conversion: the path currently chosen, the installation of delayed coking units, will be exhausted by the beginning of the next decade, opening the way for residue hydroconversion technology. Regarding the quality of gasoline and diesel, a refining scheme is proposed to meet the stricter specifications.