981 results for SLA-Technologie
Abstract:
Artificial neural networks (ANNs) have shown great promise in modeling circuit parameters for computer-aided design applications. Leakage currents, which depend on process parameters, supply voltage, and temperature, can be modeled accurately with ANNs. However, the complex nature of the ANN model, with the standard sigmoidal activation functions, does not allow analytical expressions for its mean and variance. We propose the use of a new activation function that allows us to derive an analytical expression for the mean and a semi-analytical expression for the variance of the ANN-based leakage model. To the best of our knowledge, this is the first result in this direction. Our neural network model also includes the voltage and temperature as input parameters, thereby enabling voltage- and temperature-aware statistical leakage analysis (SLA). All existing SLA frameworks are closely tied to the exponential-polynomial leakage model and hence fail to work with sophisticated ANN models. In this paper, we also set up an SLA framework that can work efficiently with these ANN models. Results show that the cumulative distribution function of the leakage current of ISCAS'85 circuits can be predicted accurately, with the errors in mean and standard deviation relative to Monte Carlo simulations being less than 1% and 2%, respectively, across a range of voltage and temperature values.
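A hedged illustration of why the choice of activation matters for closed-form moments (the paper's actual activation function is not specified in this abstract): if a hidden unit's pre-activation z is Gaussian with mean \mu_z and variance \sigma_z^2, an exponential activation \phi(z) = e^z yields a lognormal output whose moments are analytical,

\[
\mathbb{E}[e^{z}] = \exp\!\left(\mu_z + \tfrac{\sigma_z^2}{2}\right), \qquad
\operatorname{Var}[e^{z}] = \left(e^{\sigma_z^2} - 1\right)\exp\!\left(2\mu_z + \sigma_z^2\right),
\]

whereas a sigmoid admits no such closed form under a Gaussian input. This is illustrative only and not necessarily the activation proposed in the paper.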
Abstract:
This paper presents an intelligent procurement marketplace for finding the best mix of web services to dynamically compose the business process desired by a web service requester. We develop a combinatorial auction approach that leads to an integer programming formulation of the web services composition problem. The model takes into account Quality of Service (QoS) and Service Level Agreements (SLAs) to differentiate among multiple service providers capable of fulfilling a given functionality. An important feature of the model is interface-aware composition.
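A minimal sketch of the kind of integer program that a combinatorial-auction formulation of service composition typically produces (the symbols below are illustrative assumptions, not taken from the paper): with binary variables x_j selecting provider bids,

\[
\min_{x}\ \sum_{j} c_j\, x_j
\quad\text{s.t.}\quad
\sum_{j:\, i \in F_j} x_j \ge 1 \ \ \forall i \in \mathcal{F},
\qquad
\sum_{j} q_{kj}\, x_j \le Q_k \ \ \forall k,
\qquad
x_j \in \{0,1\},
\]

where c_j is the price of bid j, F_j is the set of required functionalities covered by bid j, and the second family of constraints bounds aggregate QoS attributes by SLA limits Q_k. The paper's model additionally encodes interface compatibility between the selected services, which is not shown in this sketch.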
Abstract:
We investigate the impact of the Indian Ocean Dipole (IOD) and El Niño and the Southern Oscillation (ENSO) on sea level variations in the North Indian Ocean during 1957-2008. Using tide-gauge and altimeter data, we show that the IOD and ENSO leave characteristic signatures in the sea level anomalies (SLAs) in the Bay of Bengal. During a positive IOD event, negative SLAs are observed during April-December, with the SLAs decreasing continuously to a peak during September-November. During El Niño, negative SLAs are observed twice (April-December and November-July), with a relaxation between the two peaks. SLA signatures during negative IOD and La Niña events are much weaker. We use a linear, continuously stratified model of the Indian Ocean to simulate the sea level patterns of IOD and ENSO events. We then separate the solutions into parts that correspond to specific processes: coastal alongshore winds, remote forcing from the equator via reflected Rossby waves, and direct forcing by interior winds within the bay. During pure IOD events, the SLAs are forced both from the equator and by direct wind forcing. During ENSO events, they are primarily equatorially forced, with only a minor contribution from direct wind forcing. Using a lead/lag covariance analysis between the Niño-3.4 SST index and Indian Ocean wind stress, we derive a composite wind field for a typical El Niño event; the resulting solution has two negative SLA peaks. The IOD and ENSO signatures are not evident off the west coast of India.
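The lead/lag covariance step can be summarized with a short sketch; the function below is a generic illustration, and the array shapes and variable names are assumptions rather than the authors' code.

```python
import numpy as np

def lead_lag_covariance(nino34, tau, max_lag=12):
    """Lead/lag covariance between a Nino-3.4 SST index (1-D, monthly) and
    wind-stress anomalies tau (time x space), for lags up to max_lag months.
    A positive lag means the index leads the wind stress."""
    nino = (nino34 - nino34.mean()) / nino34.std()  # standardized index
    n = len(nino)
    covs = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = nino[:n - lag], tau[lag:]
        else:
            a, b = nino[-lag:], tau[:lag]
        # covariance of the (centered) fields with the standardized index
        covs[lag] = (a[:, None] * (b - b.mean(axis=0))).mean(axis=0)
    return covs

# A composite wind field for a typical El Nino can then be formed by, e.g.,
# regressing the wind-stress anomalies on the index at the lags of largest
# covariance and scaling by one standard deviation of the index
# (an illustrative recipe; the paper's exact compositing procedure may differ).
```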
Abstract:
Monitoring of infrastructural resources in clouds plays a crucial role in providing application guarantees such as performance, availability, and security. Monitoring is crucial from two perspectives: that of the cloud user and that of the service provider. The cloud user's interest is in analyzing the application so as to arrive at appropriate service-level agreement (SLA) demands, while the cloud provider's interest is in assessing whether those demands can be met. To support this, a monitoring framework is necessary, particularly since cloud hosts are subject to varying load conditions. To illustrate the importance of such a framework, we take performance as the Quality of Service (QoS) requirement and show how inappropriate provisioning of resources may lead to unexpected performance bottlenecks. We evaluate existing monitoring frameworks to motivate the need for more powerful ones. We then propose a distributed monitoring framework that enables fine-grained monitoring of applications, and demonstrate it with a prototype implementation for typical use cases.
Abstract:
Service systems are labor intensive, and their workload tends to vary greatly with time. Adapting the staffing levels to the workload in such systems is nontrivial because of the large number of parameters and operational variations, yet it is crucial for business objectives such as minimal labor inventory. One of the central challenges is to optimize the staffing while maintaining system steady state and compliance with aggregate SLA constraints. We formulate this problem as a parameterized constrained Markov process and propose a novel stochastic optimization algorithm for solving it. Our algorithm is a multi-timescale stochastic approximation scheme that incorporates an SPSA-based algorithm for 'primal descent' and couples it with a 'dual ascent' scheme for the Lagrange multipliers. We validate this optimization scheme on five real-life service systems and compare it with OptQuest, a state-of-the-art optimization toolkit. Being two orders of magnitude faster than OptQuest, our scheme is particularly suitable for adaptive labor staffing. We also observe that it guarantees convergence and finds better solutions than OptQuest in many cases.
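A minimal sketch of the primal-descent/dual-ascent structure described above, using a standard two-sided SPSA gradient estimate of the Lagrangian. All names, step sizes, and the interface of cost_and_violation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def spsa_lagrangian_step(theta, lam, cost_and_violation, delta=0.1, a=0.01, b=0.005):
    """One primal-descent / dual-ascent step for min E[c(theta)] s.t. E[g(theta)] <= 0,
    using an SPSA estimate of the gradient of the Lagrangian L = c + lam * g.
    In a multi-timescale scheme the dual update typically uses a smaller step size."""
    # Simultaneous perturbation: one random +/-1 direction perturbs all coordinates.
    pert = np.random.choice([-1.0, 1.0], size=theta.shape)
    c_plus, g_plus = cost_and_violation(theta + delta * pert)
    c_minus, g_minus = cost_and_violation(theta - delta * pert)
    lagr_plus = c_plus + lam * g_plus
    lagr_minus = c_minus + lam * g_minus
    grad_est = (lagr_plus - lagr_minus) / (2.0 * delta * pert)  # SPSA gradient estimate
    theta = theta - a * grad_est                   # primal descent on staffing levels
    c_now, g_now = cost_and_violation(theta)
    lam = max(0.0, lam + b * g_now)                # dual ascent on the Lagrange multiplier
    return theta, lam
```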
Abstract:
Glaciers have a direct relation to climate change. The equilibrium line altitude (ELA) is the most useful parameter for studying the effect of climate change on glaciers. The ELA is the theoretical snowline at which the glacier mass balance is zero; the snowline altitude (SLA) at the end of the melting season is generally regarded as the ELA. Glaciers of the Chandra-Bhaga basin in the Lahaul-Spiti district of Himachal Pradesh were chosen to study the ELA using satellite images from 1980 to 2007. A total of 19 glaciers from the Chandra-Bhaga basin were identified and selected to study the ELA variation over 27 years. The study reveals that the mean SLA of the sub-basin increased from 5009 ± 61 m to 5401 ± 21 m between 1980 and 2007. This is in agreement with the reported increase in temperature and decrease in winter snowfall in the North-West Himalaya over the last century.
Abstract:
Elasticity in cloud systems provides the flexibility to acquire and relinquish computing resources on demand. However, in current virtualized systems resource allocation is mostly static: resources are allocated during VM instantiation, and any change in workload that requires a significant increase or decrease in resources is handled by VM migration. Hence, cloud users tend to characterize their workloads at a coarse-grained level, which potentially leads to under-utilized VM resources or under-performing applications. A more flexible and adaptive resource allocation mechanism would benefit variable workloads, such as those of web servers. In this paper, we present an elastic resource framework for the IaaS cloud layer that addresses this need. The framework provides an application workload forecasting engine that predicts the expected demand at run time; the prediction is fed to the resource manager, which modulates resource allocation accordingly. Depending on the prediction errors, resources can be over-allocated or under-allocated compared with the actual demand made by the application. Over-allocation leads to unused resources, while under-allocation can cause under-performance. To strike a good trade-off between over-allocation and under-performance, we derive an excess cost model in which excess resources are captured as an over-allocation cost and under-allocation is captured as a penalty cost for violating the application service level agreement (SLA). The confidence interval of the predicted workload is used to minimize this excess cost with minimal effect on SLA violations. An example case study for an academic institute's web server workload is presented. Using the confidence interval to minimize excess cost, we achieve a significant reduction in the required resource allocation while restricting application SLA violations to below 2-3%.
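A hedged sketch of how such an excess-cost model and confidence-interval-based provisioning could look; the coefficient names and the Gaussian-error assumption are illustrative, not the paper's exact model.

```python
def excess_cost(allocated, demand, c_over, c_sla):
    """Excess-cost model sketch: over-allocation is billed at c_over per unused
    resource unit; under-allocation is penalized at c_sla per unit of unmet
    demand (a proxy for SLA-violation penalties)."""
    over = max(allocated - demand, 0.0)
    under = max(demand - allocated, 0.0)
    return c_over * over + c_sla * under

def allocation_from_forecast(predicted, sigma, z=1.28):
    """Provision at the upper edge of a one-sided confidence interval of the
    forecast; z trades off over-allocation cost against SLA-violation risk
    (z = 1.28 is roughly a 90% one-sided interval for Gaussian forecast error)."""
    return predicted + z * sigma
```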
Abstract:
We consider the problem of optimizing the workforce of a service system. Adapting the staffing levels in such systems is non-trivial due to large variations in workload, and the large number of system parameters does not allow for a brute-force search. Further, because these parameters change on a weekly basis, the optimization should not take longer than a few hours. Our aim is to find the optimal staffing levels, from a discrete high-dimensional parameter set, that minimize the long-run average of a single-stage cost function while adhering to constraints on queue stability and service-level agreement (SLA) compliance. The single-stage cost function balances the conflicting objectives of utilizing workers better and attaining the target SLAs. We formulate this problem as a constrained Markov cost process parameterized by the (discrete) staffing levels. We propose novel simultaneous perturbation stochastic approximation (SPSA)-based algorithms for solving this problem. The algorithms include both first-order and second-order methods and incorporate SPSA-based gradient/Hessian estimates for primal descent, while performing dual ascent for the Lagrange multipliers. Both algorithms are online and update the staffing levels in an incremental fashion. Further, they involve a certain generalized smooth projection operator, which is essential for projecting the continuous-valued worker parameter tuned by our algorithms onto the discrete set. The smoothness is necessary to ensure that the underlying transition dynamics of the constrained Markov cost process are themselves smooth (as a function of the continuous-valued parameter), a critical requirement for proving the convergence of both algorithms. We validate our algorithms via performance simulations based on data from five real-life service systems. For comparison, we also implement a scatter-search-based algorithm using OptQuest, a state-of-the-art optimization toolkit. From the experiments, we observe that both our algorithms converge empirically and consistently outperform OptQuest in most of the settings considered. This finding, coupled with the computational advantage of our algorithms, makes them amenable to adaptive labor staffing in real-life service systems.
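One plausible realization of a smooth projection from a continuous-valued staffing parameter onto a discrete set is to randomize between the two nearest discrete levels, so that the expected staffing level varies smoothly with the continuous parameter. This is an illustrative sketch under that assumption, not necessarily the paper's operator.

```python
import numpy as np

def smooth_project(theta_cont, levels):
    """Map a continuous-valued worker parameter onto a discrete set of staffing
    levels by randomizing between the two nearest levels, with probabilities
    proportional to proximity; the induced expected dynamics are then smooth
    in the continuous parameter."""
    levels = np.asarray(sorted(levels), dtype=float)
    theta = np.clip(theta_cont, levels[0], levels[-1])
    hi = np.searchsorted(levels, theta)   # index of the level at or just above theta
    lo = max(hi - 1, 0)
    if levels[hi] == levels[lo]:
        return levels[hi]
    frac = (theta - levels[lo]) / (levels[hi] - levels[lo])
    return levels[hi] if np.random.rand() < frac else levels[lo]
```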
Abstract:
The 150th cruise of the FFS "Walther Herwig III" was carried out in July/August 1994 as a joint project of the Institut für Biochemie und Technologie and the Institut für Fischereiökologie, in order to obtain an overview of the contaminant load (inorganic and organic contaminants, radioactivity) of water, sediment, and biota in the Barents Sea. Within this research programme, the Institut für Biochemie und Technologie took on the part "Determination of heavy-metal concentrations in the edible portion (fillet) of fish and other marine animals".
Abstract:
This master's thesis aims to understand the spatial transformations that have occurred in the municipality of Petrópolis/RJ following the emergence of new centralities, which began to appear mainly from the 1980s onward. The work is also based on the study of the formation of a service economy in the municipality, as an advanced stage of capitalist economies, following the sharp decline of its former industrial base and its very incipient agricultural base, even though it is a medium-sized city. In this way, Petrópolis began to stimulate certain branches of the tertiary sector that are extremely important and that can also drive other branches and sectors, such as tourism and leisure, fashion (clothing and garment manufacturing), gastronomy, decoration/design, and high technology. However, stimulating the development of a service economy necessarily involves actions implemented by the public authorities, aimed at removing obstacles to attracting new actors who will invest in cities that meet their needs. For this reason, in addition to public policies, the municipality invests in urban infrastructure in order to align itself with urban entrepreneurialism, hoping thereby to enter the circuit of city marketing. This model of tertiary development, however, created intra-urban differences in the municipality of Petrópolis by stimulating the emergence of new centralities, which, in our view, materialize mainly in the image of the third district, Itaipava. This centrality has specific features that make it an important locus for a wealthy social class that prefers not to depend on longer trips to the distrito-Sede/Centro Histórico (seat district/historic centre) to consume goods and services. Hence, the new centrality of Itaipava emerges as a space specialized in tourism and leisure, gastronomy, decoration/design, and shopping centres, meeting the more specific demands of a wealthy class. However, even with the existence of a new centrality, at no point does this work construct the idea of a shattered and dying traditional, historic centre. Quite the contrary: what was observed in Petrópolis was the reverse. The distrito-Sede/Centro Histórico remains the most dynamic area within the municipal territory and, unlike what is observed in large cities, there is no loss of its importance, but rather different hierarchies that complement one another. Proof of this is the series of data presented in the final chapter. In short, a medium-sized municipality qualifies as a service economy that spreads to areas away from the centre, where even so the traditional centre does not lose its hegemony despite the emergence of new centralities.
Abstract:
Tribute to Georges Laplace, held in Vitoria-Gasteiz on 13, 14, and 15 November 2012. Edited by Aitor Calvo, Aitor Sánchez, Maite García-Rojas, and Mónica Alonso-Eguíluz.