173 results for deadline
Abstract:
We present Dithen, a novel computation-as-a-service (CaaS) cloud platform specifically tailored to the parallel execution of large-scale multimedia tasks. Dithen handles the upload/download of both multimedia data and executable items, the assignment of compute units to multimedia workloads, and the reactive control of the available compute units to minimize the cloud infrastructure cost under deadline-abiding execution. Dithen combines three key properties: (i) the reactive assignment of individual multimedia tasks to available computing units according to availability and predetermined time-to-completion constraints; (ii) optimal resource prediction based on Kalman-filter estimates; (iii) the use of additive increase multiplicative decrease (AIMD) algorithms (well known as the congestion-control mechanism of the Transmission Control Protocol) to control the number of units servicing workloads. The deployment of Dithen over Amazon EC2 spot instances is shown to be capable of processing more than 80,000 video transcoding, face detection and image processing tasks (equivalent to the processing of more than 116 GB of compressed data) for less than $1 in billing cost from EC2. Moreover, the proposed AIMD-based control mechanism, in conjunction with the Kalman estimates, is shown to provide more than a 27% reduction in EC2 spot instance cost against methods based on reactive resource estimation. Finally, Dithen is shown to offer a 38% to 500% reduction in billing cost against the current state-of-the-art in CaaS platforms on Amazon EC2 (Amazon Lambda and Amazon Autoscale). A baseline version of Dithen is currently available at dithen.com.
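The combination of Kalman estimation and AIMD control described in this abstract lends itself to a short illustration. The Python sketch below is not the Dithen implementation; all names (ScalarKalman, aimd_step, capacity_per_unit) and the sample arrival rates are hypothetical. A one-dimensional Kalman filter smooths the measured task-arrival rate; the compute-unit count then grows additively when the estimated load exceeds the pool's capacity and shrinks multiplicatively when the pool sits idle:

    # Illustrative sketch only; not the Dithen implementation.
    class ScalarKalman:
        """One-dimensional Kalman filter tracking the task-arrival rate."""
        def __init__(self, estimate=0.0, error=1.0, process_var=0.01, measure_var=1.0):
            self.x, self.p = estimate, error           # state estimate and its variance
            self.q, self.r = process_var, measure_var  # process and measurement noise

        def update(self, measurement):
            self.p += self.q                          # predict: variance grows by process noise
            gain = self.p / (self.p + self.r)         # Kalman gain
            self.x += gain * (measurement - self.x)   # correct toward the new measurement
            self.p *= (1.0 - gain)
            return self.x

    def aimd_step(units, estimated_load, capacity_per_unit, alpha=1, beta=0.5):
        """Additive increase / multiplicative decrease of the compute-unit count."""
        if estimated_load > units * capacity_per_unit:
            return units + alpha                      # additive increase: add a fixed number of units
        if estimated_load < beta * units * capacity_per_unit:
            return max(1, int(units * beta))          # multiplicative decrease: shrink an idle pool
        return units                                  # within capacity: hold steady

    # Hypothetical control loop: one AIMD decision per monitoring interval.
    kalman, units = ScalarKalman(), 4
    for measured_rate in [10.0, 14.0, 18.0, 9.0, 5.0]:  # tasks/second (made-up data)
        units = aimd_step(units, kalman.update(measured_rate), capacity_per_unit=3.0)

As in TCP, the asymmetry is the point of the design: scale-up is gradual to avoid oscillation, while over-provisioning, which directly inflates spot-instance billing, is corrected aggressively.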
Abstract:
Datacenters have emerged as the dominant form of computing infrastructure over the last two decades. The tremendous increase in the requirements of data analysis has led to a proportional increase in power consumption, and datacenters are now one of the fastest growing electricity consumers in the United States. Another rising concern is the loss of throughput due to network congestion. Scheduling models that do not explicitly account for data placement may lead to the transfer of large amounts of data over the network, causing unacceptable delays. In this dissertation, we study different scheduling models that are inspired by the dual objectives of minimizing energy costs and network congestion in a datacenter. As datacenters are equipped to handle peak workloads, the average server utilization in most datacenters is very low. As a result, one can achieve huge energy savings by selectively shutting down machines when demand is low. In this dissertation, we introduce the network-aware machine activation problem to find a schedule that simultaneously minimizes the number of machines necessary and the congestion incurred in the network. Our model significantly generalizes well-studied combinatorial optimization problems such as hard-capacitated hypergraph covering and is thus strongly NP-hard; as a result, we focus on finding good approximation algorithms. Data-parallel computation frameworks such as MapReduce have popularized the design of applications that require a large amount of communication between different machines. Efficient scheduling of these communication demands is essential to guarantee efficient execution of the different applications. In the second part of the thesis, we study the approximability of the co-flow scheduling problem, which has recently been introduced to capture these application-level demands. Finally, we also study the question, "In what order should one process jobs?" Often, precedence constraints specify a partial order over the set of jobs, and the objective is to find suitable schedules that satisfy the partial order. However, in the presence of hard deadline constraints, it may be impossible to find a schedule that satisfies all precedence constraints. In this thesis we formalize different variants of job scheduling with soft precedence constraints and conduct the first systematic study of these problems.
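As a rough illustration of the network-aware machine activation objective, one can sketch it as an integer program that opens as few machines as possible while keeping the data-movement load on every network link within a congestion budget. This is a generic sketch of the model class, not the dissertation's exact formulation:

    \min \sum_i y_i \quad \text{s.t.} \quad
    \sum_i x_{ij} = 1 \;\; \forall j, \qquad
    \sum_j s_j\, x_{ij} \le c_i\, y_i \;\; \forall i, \qquad
    \sum_{i,j \,:\, e \in P(i,j)} d_j\, x_{ij} \le \Lambda \;\; \forall e, \qquad
    x_{ij}, y_i \in \{0,1\}

Here $y_i$ opens machine $i$ (capacity $c_i$), $x_{ij}$ assigns job $j$ (size $s_j$, data demand $d_j$) to it, $P(i,j)$ is the set of links traversed to move job $j$'s data to machine $i$, and $\Lambda$ is the per-link congestion budget. The coupling of an activation term with per-link capacity constraints is what yields the hard-capacitated covering structure mentioned above.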
Abstract:
The knowledge-intensive character of software production and its rising demand suggest the need for mechanisms to properly manage the knowledge involved, in order to meet deadline, cost and quality requirements. Knowledge capitalization is a process that ranges from the identification to the evaluation of the knowledge produced and used. In software development specifically, capitalization enables easier access to knowledge, minimizes its loss, reduces the learning curve, and avoids repeated errors and rework. This thesis therefore presents Know-Cap, a method developed to organize and guide the capitalization of knowledge in software development. Know-Cap facilitates the location, preservation, value addition and updating of knowledge, so that it can be used in the execution of new tasks. The method was derived from a set of methodological procedures: a literature review, a systematic review and an analysis of related work. The feasibility and appropriateness of Know-Cap were analyzed through an application study conducted in a real case and an analytical study of software development companies. The results obtained indicate that Know-Cap supports the capitalization of knowledge in software development.
Abstract:
The ICES Working Group for the Bay of Biscay and the Iberian Waters Ecoregion (WGBIE) met in Copenhagen, Denmark, during 13–14 May 2016. Its remit covered 22 stocks distributed from ICES Divisions 3.a–4.a, though mostly in Subareas 7, 8 and 9. There were 21 participants, some of whom joined the meeting remotely. The group was tasked with conducting assessments of stock status for 22 stocks using analytical, forecast methods or trends indicators, providing catch forecasts for eight stocks, and providing a first draft of the ICES advice for 2016 for fourteen stocks. For the remaining stocks, the group had to update catch information and indices of abundance where needed; depending on the result of this update, namely whether it changed the perception of the stock, the working group drafted new advice. Analytical assessments using age-structured models were conducted for the northern and southern stocks of megrim and the Bay of Biscay sole. The two hake stocks and one southern stock of anglerfish were assessed using models that allow the use of only length-structured data (no age data). A surplus-production model, without age or length structure, was used to assess the second southern stock of anglerfish. No analytical assessments have been provided for the northern stocks of anglerfish after 2006, mostly due to ageing problems and to an increase in discards in recent years, for which there are no reliable data at the stock level. The state of stocks for which no analytical assessment could be performed was inferred from examination of commercial LPUE or CPUE data and from survey information. Three Nephrops stocks from the Bay of Biscay and the Iberian waters are scheduled for benchmark assessments in October 2016. The WGBIE meeting spent some time reviewing the progress towards the benchmark (see Annex 6), together with longer-term benchmarks (2017 and after, see Section 1) for sea bass in the Bay of Biscay and all anglerfish and hake stocks assessed by the WG. For the northern megrim stock, the scheduled inter-benchmark meeting was completed successfully, and the group reviewed the outcome and accepted the category 1 update assessment. A recurrent issue significantly constrained the group's ability to address the terms of reference this year: despite an ICES data call with a deadline of six weeks before the meeting, data for several stocks were resubmitted during the meeting, which led to increased workloads during the working group, since in that case the assessments could not be carried out in national laboratories prior to the meeting as mentioned in the ToRs. This is an important matter of concern for the group members. Section 1 of the report presents a summary by stock and discusses general issues. Section 2 provides descriptions of the relevant fishing fleets and surveys used in the assessment of the stocks. Sections 3–18 contain the single-stock assessments.
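For readers unfamiliar with the model class, surplus-production assessments of the kind mentioned above typically take the Schaefer form, shown here for illustration only; the report does not specify which production model was fitted:

    B_{t+1} = B_t + r\,B_t\left(1 - \frac{B_t}{K}\right) - C_t

where $B_t$ is stock biomass in year $t$, $r$ the intrinsic population growth rate, $K$ the carrying capacity, and $C_t$ the catch taken in year $t$. Because the model needs only a biomass index and catch series, it remains usable for stocks, such as the anglerfish discussed above, where age or length compositions are unreliable.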
Abstract:
Shows the prison with wooden fence, 18 guard towers, the famous "deadline," the north and south gates, Sweetwater Creek, the "Valley of Death," fortifications, batteries, and cook house. He depicts overcrowding with a blizzard of tiny dots everywhere, writing that the dots stand for "Union soldiers."
Abstract:
This flyer promotes a call for applications from graduate students for the Eliana Rivero Research Scholarship in Cuban Studies. The scholarship provides one graduate student the opportunity to conduct research in Cuban studies, with special emphasis on the humanities, at the Cuban Research Institute. The deadline for applications is February 15, 2016.
Abstract:
Labelling is a legal instrument that ensures consumer protection and defence. The label provides all the necessary and important information that guides the consumer's choice of one foodstuff over another, according to his or her needs, whether health-related or nutritional, besides favouring correct storage, preparation and consumption of food (thereby increasing food safety). It is highly important that labelling legislation be kept up to date, following the evolution and demands of society. To this end, Regulation (EU) No 1169/2011 on the provision of food information to consumers was published, amending Regulations (EC) No 1924/2006 and (EC) No 1925/2006 of the European Parliament and of the Council and repealing Commission Directive 87/250/EEC, Council Directive 90/496/EEC, Commission Directive 1999/10/EC, Directive 2000/13/EC of the European Parliament and of the Council, Commission Directive 2002/67/EC and Commission Regulation (EC) No 608/2004. The main objectives of this regulation were to update and consolidate the legislation on general and nutrition labelling, eliminate inconsistencies between different legislative acts, facilitate the free movement of foodstuffs between Member States, and clarify the responsibilities of the different actors in the food chain. The adoption of this new regulation was an important milestone in labelling legislation, imposing new obligations on companies. It was in this context that this internship took place: within a short period, all "Auchan brand" labels were revised to remain in conformity with the law. The adaptation was carried out on about 1,000 labels from several food sectors, namely milk and dairy products, meat, fishery products, eggs, etc. All the work was completed within 1 year and 4 months; the stipulated deadline was met and all labels were corrected before the regulation became mandatory (13 December 2014). It is concluded that the correction methodology applied had a 100% success rate, since no label required later changes (due to errors) after completion. It can also be stated that the regulation, although it simplifies and harmonizes the labelling of foodstuffs, still left some gaps, always referring back to national measures.
Abstract:
This paper explains who can register to vote, the registration deadline, acceptable forms of ID, how to register, and what to do if you move.