875 results for service level
Abstract:
One of the most challenging problems in mobile broadband networks is how to assign the available radio resources among the different mobile users. Traditionally, research proposals are either specific to some type of traffic or deal with computationally intensive algorithms aimed at optimizing the delivery of general-purpose traffic. Consequently, commercial networks do not incorporate these mechanisms due to the limited hardware resources at the mobile edge. Emerging 5G architectures introduce cloud computing principles to add flexible computational resources to Radio Access Networks. This paper makes use of Mobile Edge Computing concepts to introduce a new element, denoted the Mobile Edge Scheduler, aimed at minimizing the mean delay of general traffic flows in the LTE downlink. This element runs close to the eNodeB and implements a novel flow-aware and channel-aware scheduling policy in order to match transmissions to the available channel quality of end users.
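The abstract names a flow-aware and channel-aware policy but does not give its form. Purely as an illustration, the sketch below ranks downlink flows by combining remaining flow size with reported channel quality; the field names and the ranking rule are assumptions, not the paper's algorithm.

# Illustrative sketch only: a flow-aware, channel-aware downlink scheduler.
# Field names (remaining_bytes, cqi) and the ranking rule are assumptions,
# not taken from the paper.
from dataclasses import dataclass

@dataclass
class Flow:
    user_id: int
    remaining_bytes: int   # bytes still queued for this flow
    cqi: int               # reported channel quality indicator (1..15)

def pick_next_flow(flows):
    """Favour short flows on good channels: an SRPT-like rule weighted by CQI."""
    def priority(f):
        # Lower remaining size and higher CQI -> smaller score -> served first.
        return f.remaining_bytes / max(f.cqi, 1)
    return min(flows, key=priority) if flows else None

flows = [Flow(1, 50_000, 12), Flow(2, 5_000, 7), Flow(3, 200_000, 15)]
print(pick_next_flow(flows).user_id)   # flow 2 wins under this toy rule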
Abstract:
A QoS adaptation to dynamically changing system conditions that takes into consideration the user's constraints on the stability of service provisioning is presented. The goal is to allow the system to make QoS adaptation decisions in response to fluctuations in task traffic flow, under the control of the user. We pay special attention to the case where the stability period and resource load variation of Service Level Agreements for different types of services are monitored and used to dynamically adapt future stability periods, according to a feedback control scheme. The system's adaptation behaviour can be configured according to a desired confidence level on future resource usage. The viability of the proposed approach is validated by preliminary experiments.
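As a rough illustration of the feedback idea described above, the sketch below adapts the next stability period from the violation rate observed in the previous one; the parameter names, thresholds and the proportional update rule are assumptions for illustration, not the authors' scheme.

# Illustrative sketch: feedback control of the SLA stability period.
# The update rule and thresholds are assumptions, not the paper's algorithm.
def next_stability_period(current_period, observed_violation_rate,
                          target_violation_rate=0.05, gain=0.5,
                          min_period=60.0, max_period=3600.0):
    """Shorten the period when violations exceed the target, lengthen it otherwise."""
    error = target_violation_rate - observed_violation_rate
    new_period = current_period * (1.0 + gain * error / max(target_violation_rate, 1e-9))
    return max(min_period, min(max_period, new_period))

print(next_stability_period(600.0, 0.10))  # more violations than targeted -> shorter period
print(next_stability_period(600.0, 0.01))  # fewer violations -> longer period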
Abstract:
This project aims to prepare Worten Empresas (WE) to meet increasing market demand through process changes, focusing on the Portuguese market and particularly on internal B2B clients. Several methods were used to measure the current service level provided: process mapping, resource assessment, benchmarking and a survey. The results were then compared against the service level actually desired by WE's customers in order to identify performance gaps in response times and in the quality of follow-up during the sales process. To bridge the identified gaps, a set of recommendations and an implementation plan were suggested to improve and monitor customer experience. This study concluded that it is possible to meet the increasing level of demand and at the same time improve customer satisfaction by implementing changes at the operations level.
Abstract:
To reduce costs, many companies outsource services that do not belong to their core competence to external service providers. This process is known as outsourcing. The resulting dependencies on the external service providers are governed contractually by means of Service Level Agreements (SLAs). The task of Service Level Management (SLM) is to monitor and ensure compliance with the contractually fixed quality-of-service parameters. Automated processing therefore requires a formal specification of SLAs. Since the market has produced a large number of different SLM tools, proprietary SLA formats and missing specification methods cause problems in practice. This results in tool dependence and limited reusability of already specified SLAs. This thesis develops an approach for platform-independent Service Level Management. The goal is a unified modelling approach, so that different management approaches can be integrated and a separation between the problem domain and the technology domain is achieved. Platform independence also gives the created models a high degree of long-term stability. A further goal of the work is to guarantee the reusability of modelled SLAs and to provide a process-oriented modelling methodology. Automated deployment of modelled SLAs is of decisive relevance for practical use. To achieve these goals, the principles of Model Driven Architecture (MDA) are applied to the problem domain of Service Level Management. The central idea of the work is the definition of SLA patterns, which are configuration-independent abstractions of Service Level Agreements. These SLA patterns correspond to the Platform-Independent Model (PIM) of the MDA. A suitable model transformation generates an SLA instance from an SLA pattern; this instance contains all necessary configuration information and is already in the format of the target platform. An SLA instance thus corresponds to the Platform-Specific Model (PSM) of the MDA. The deployment of the SLA instances and the resulting configuration of the management system correspond to the Platform-Specific Code (PSC) of the MDA. After this step, the management system is able to monitor the quality-of-service parameters agreed in the SLA on its own. As part of the work, a UML extension was defined that enables SLA patterns to be modelled with a UML tool. Modelling can be done purely graphically or with the inclusion of the Object Constraint Language (OCL). For the practical realisation of the approach, a management architecture was developed and implemented as a prototype. The overall approach was evaluated in a case study.
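To make the PIM-to-PSM step concrete, here is a minimal sketch of how an SLA pattern might be turned into a platform-specific SLA instance. The pattern structure, parameter names and the XML-like target format are hypothetical; they only illustrate the transformation idea described in the thesis, not its actual metamodel.

# Minimal sketch of the SLA-pattern -> SLA-instance transformation (PIM -> PSM).
# The pattern fields and the XML-like target format are hypothetical.
import xml.etree.ElementTree as ET

sla_pattern = {                       # configuration-independent SLA pattern (PIM)
    "service": None,                  # placeholders filled in at instantiation time
    "metric": "availability",
    "threshold": None,
    "unit": "percent",
}

def instantiate(pattern, **config):
    """Fill the pattern's placeholders and emit a platform-specific instance (PSM)."""
    instance = {k: config.get(k, v) for k, v in pattern.items()}
    missing = [k for k, v in instance.items() if v is None]
    if missing:
        raise ValueError(f"unconfigured parameters: {missing}")
    root = ET.Element("sla", service=instance["service"])
    obj = ET.SubElement(root, "objective", metric=instance["metric"],
                        unit=instance["unit"])
    obj.text = str(instance["threshold"])
    return ET.tostring(root, encoding="unicode")

print(instantiate(sla_pattern, service="mail-hosting", threshold=99.5))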
Abstract:
Traditional resource management has had as its main objective the optimisation of throughput, based on parameters such as CPU, memory, and network bandwidth. With the appearance of Grid Markets, new variables that determine economic expenditure, benefit and opportunity must be taken into account. The SORMA project aims to allow resource owners and consumers to exploit market mechanisms to sell and buy resources across the Grid. SORMA’s motivation is to achieve efficient resource utilisation by maximising revenue for resource providers, and minimising the cost of resource consumption within a market environment. An overriding factor in Grid markets is the need to ensure that desired Quality of Service levels meet the expectations of market participants. This paper explains the proposed use of an Economically Enhanced Resource Manager (EERM) for resource provisioning based on economic models. In particular, this paper describes techniques used by the EERM to support revenue maximisation across multiple Service Level Agreements.
Abstract:
Cloud Computing enables provisioning and distribution of highly scalable services in a reliable, on-demand and sustainable manner. However, objectives of managing enterprise distributed applications in cloud environments under Service Level Agreement (SLA) constraints lead to challenges for maintaining optimal resource control. Furthermore, conflicting objectives in management of cloud infrastructure and distributed applications might lead to violations of SLAs and inefficient use of hardware and software resources. This dissertation focuses on how SLAs can be used as an input to the cloud management system (CMS), increasing the efficiency of allocating resources, as well as that of infrastructure scaling. First, we present an extended SLA semantic model for modelling complex service dependencies in distributed applications, and for enabling automated cloud infrastructure management operations. Second, we describe a multi-objective VM allocation algorithm for optimised resource allocation in infrastructure clouds. Third, we describe a method of discovering relations between the performance indicators of services belonging to distributed applications and then using these relations for building scaling rules that a CMS can use for automated management of VMs. Fourth, we introduce two novel VM-scaling algorithms, which optimally scale systems composed of VMs, based on given SLA performance constraints. All presented research works were implemented and tested using enterprise distributed applications.
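The dissertation's VM-scaling algorithms are driven by SLA performance constraints; the toy sketch below, with assumed metric names and thresholds, shows the general shape of such a rule rather than the actual algorithms.

# Toy sketch of SLA-constrained VM scaling; thresholds and metrics are assumptions.
def decide_scaling(current_vms, avg_response_ms, sla_limit_ms,
                   scale_out_margin=0.9, scale_in_margin=0.5,
                   min_vms=1, max_vms=20):
    """Scale out when response time approaches the SLA limit, scale in when far below it."""
    if avg_response_ms > scale_out_margin * sla_limit_ms and current_vms < max_vms:
        return current_vms + 1
    if avg_response_ms < scale_in_margin * sla_limit_ms and current_vms > min_vms:
        return current_vms - 1
    return current_vms

print(decide_scaling(4, avg_response_ms=190, sla_limit_ms=200))  # -> 5 (scale out)
print(decide_scaling(4, avg_response_ms=60, sla_limit_ms=200))   # -> 3 (scale in)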
Abstract:
Advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance might suffer, leading to violations of Service Level Agreements (SLAs) and possibly inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means of specifying quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from instantiating service VMs in the correct order with an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimum amount of computing and network resources to use so that the performance requirements of all their applications are met. They are also interested in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, infrastructure providers are interested in optimally provisioning virtual resources onto the available physical infrastructure so that their operational costs are minimized, while maximizing the performance of tenants' applications. Motivated by the complexities associated with the management and scaling of distributed applications, while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on dynamic sizing (scaling) of virtual infrastructures composed of virtual machine (VM)-bound application services. We describe several algorithms for adapting the number of VMs allocated to the distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces. We show how these scaling rules can be combined and included in semantic SLAs for controlling the allocation of services. We also provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it. We present a resource management system based on a genetic algorithm, which performs allocation of virtual resources while considering the optimization of multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
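As a rough illustration of a GA-based multi-criteria allocation, the sketch below scores candidate VM-to-host placements by a weighted combination of hosts used, load imbalance and capacity overruns; the encoding, fitness terms, weights and data are assumptions and not the thesis's actual formulation.

# Rough sketch of a GA for VM-to-host allocation with a multi-criteria fitness.
# Encoding, fitness terms, and parameters are illustrative assumptions.
import random

VM_CPU = [2, 4, 1, 8, 2, 4]        # hypothetical CPU demand per VM
HOST_CPU = [8, 8, 16]              # hypothetical CPU capacity per host

def fitness(placement):
    """Lower is better: penalise hosts used, load imbalance, and capacity overruns."""
    load = [0] * len(HOST_CPU)
    for vm, host in enumerate(placement):
        load[host] += VM_CPU[vm]
    hosts_used = sum(1 for l in load if l > 0)
    overload = sum(max(0, l - c) for l, c in zip(load, HOST_CPU))
    imbalance = max(load) - min(load)
    return hosts_used + 0.1 * imbalance + 10 * overload

def evolve(pop_size=30, generations=100, mutation_rate=0.2):
    pop = [[random.randrange(len(HOST_CPU)) for _ in VM_CPU] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(VM_CPU))
            child = a[:cut] + b[cut:]           # one-point crossover
            if random.random() < mutation_rate:
                child[random.randrange(len(VM_CPU))] = random.randrange(len(HOST_CPU))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))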
Abstract:
Firms worldwide are taking major initiatives to reduce the carbon footprint of their supply chains in response to growing governmental and consumer pressure. In practice, these supply chains face stochastic and non-stationary demand, but most studies on the inventory lot-sizing problem with emission concerns consider deterministic demand. In this paper, we study the inventory lot-sizing problem under non-stationary stochastic demand with emission and cycle service level constraints, considering a carbon cap-and-trade regulatory mechanism. Using a mixed integer linear programming model, this paper investigates the effects of emission parameters and of product- and system-related features on supply chain performance through extensive computational experiments covering general business settings rather than a specific scenario. Results show that cycle service level and demand coefficient of variation have significant impacts on total cost and emissions irrespective of the level of demand variability, while the impact of the product's demand pattern is significant only at lower levels of demand variability. Finally, results also show that an increasing carbon price reduces total cost, total emissions and total inventory, and that the scope for emission reduction by increasing the carbon price is greater at higher levels of cycle service level and demand coefficient of variation. The analysis of results helps supply chain managers make the right decisions in different demand and service level situations.
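The paper's model is not reproduced in the abstract; purely as an illustration of the modelling style, a simplified lot-sizing formulation with a cycle-service-level (chance) constraint and cap-and-trade carbon trading could be written as follows, where $y_t$ is a binary setup variable, $q_t$ the order quantity, $I_t$ the inventory after stochastic demand $d_t$, $K$, $c$, $h$ the setup, unit and holding costs, $p_c$ the carbon price, $e^+$/$e^-$ allowances bought/sold, $C_{\mathrm{cap}}$ the emission cap and $\alpha$ the required cycle service level. This notation is ours, not the paper's.

\begin{align}
\min\ & \sum_{t=1}^{T}\bigl(K\,y_t + c\,q_t + h\,\mathbb{E}[I_t^+]\bigr) + p_c\,(e^+ - e^-) \\
\text{s.t.}\ & I_t = I_{t-1} + q_t - d_t, && t = 1,\dots,T \\
& q_t \le M\,y_t, && t = 1,\dots,T \\
& \Pr\bigl(I_t \ge 0\bigr) \ge \alpha, && t = 1,\dots,T \\
& \sum_{t=1}^{T}\bigl(\hat{e}\,y_t + e_q\,q_t + e_h\,\mathbb{E}[I_t^+]\bigr) = C_{\mathrm{cap}} + e^+ - e^-, \\
& y_t \in \{0,1\}, \quad q_t \ge 0, \quad e^+, e^- \ge 0 .
\end{align}

Here $\hat{e}$, $e_q$ and $e_h$ are per-setup, per-unit-ordered and per-unit-held emission factors and $M$ is a large constant; the chance constraint encodes the cycle service level.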
Abstract:
Service-oriented architectures and Web services have matured and become more widely accepted and used by industry. This growing adoption has increased the demand for new ways of using Web service technology. Users have started re-combining and mediating other providers' services in ways that were not anticipated by their original providers. Within organisations and cross-organisational communities, discoverable services are organised in repositories providing convenient access to adaptable end-to-end business processes. This idea is captured in the term Service Ecosystem. This paper addresses the question of how quality management can be performed in such service ecosystems. Service quality management is a key challenge when services are composed of a dynamic set of heterogeneous sub-services from different service providers. This paper contributes to this important area by developing a reference model of quality management in service ecosystems. We illustrate the application of the reference model in an exploratory case study. With this case study, we show how the reference model helps to derive requirements for the implementation and support of quality management in an exemplary service ecosystem in public administration.
Abstract:
PURPOSE Every health care sector, including hospice/palliative care, needs to systematically improve services using patient-defined outcomes. Data from the national Australian Palliative Care Outcomes Collaboration were used to determine whether hospice/palliative care patients' outcomes, and the consistency of these outcomes, have improved over the last 3 years. METHODS Data were analysed by clinical phase (stable, unstable, deteriorating, terminal). Patient-level data included the Symptom Assessment Scale and the Palliative Care Problem Severity Score. Nationally collected point-of-care data were anchored to the period July-December 2008 and subsequently compared to this baseline over six 6-month reporting cycles for all services that submitted data in every time period (n = 30), using individual longitudinal multi-level random coefficient models. RESULTS Data were analysed for 19,747 patients (46 % female; 85 % cancer; 27,928 episodes of care; 65,463 phases). There were significant improvements across all domains (symptom control, family care, psychological and spiritual care) except pain. Simultaneously, the interquartile ranges decreased, jointly indicating that better and more consistent patient outcomes were being achieved. CONCLUSION These are the first national hospice/palliative care symptom control performance data to demonstrate improvements in clinical outcomes at a service level as a result of routine data collection and systematic feedback.
Abstract:
Service compositions enable users to realize their complex needs as a single request. Despite intensive research, especially in the areas of business processes, web services and grids, how to manage service compositions so as to satisfy both functional and non-functional requirements and to adapt to dynamic changes remains an open question. In this paper we propose a (functional) architecture for adaptive management of QoS-aware service compositions. Compared to other existing architectures, this one offers two major advantages. Firstly, it supports various execution strategies based on dynamic selection and negotiation of the services included in a service composition, contracting based on service level agreements, service enactment with flexible support for exception handling, monitoring of service level objectives, and profiling of execution data. Secondly, the architecture is built on well-known existing standards for communicating and exchanging data, which significantly reduces the effort needed to integrate existing solutions and tools from different vendors. A first prototype of this architecture has been implemented within the EU-funded Adaptive Service Grid project. © 2006 Springer-Verlag.
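As a small illustration of the monitoring-and-adaptation loop such an architecture implies, the sketch below checks measured values against contracted service level objectives and hands a violated composition step back for re-selection; the objective structure and the reselect() hook are hypothetical, not the project's interfaces.

# Small illustration of SLO monitoring in a service composition.
# The objective structure and the reselect() hook are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ServiceLevelObjective:
    metric: str
    limit: float
    predicate: Callable[[float, float], bool]   # e.g. measured <= limit

def check_step(step_name, slos, measurements, reselect):
    """Compare measurements with SLOs; trigger re-selection on any violation."""
    violated = [slo for slo in slos
                if not slo.predicate(measurements.get(slo.metric, float("inf")), slo.limit)]
    if violated:
        reselect(step_name, violated)    # hand the step back to the selection component
    return not violated

slos = [ServiceLevelObjective("response_ms", 300, lambda m, l: m <= l),
        ServiceLevelObjective("availability", 0.99, lambda m, l: m >= l)]
ok = check_step("payment-service", slos,
                {"response_ms": 420, "availability": 0.995},
                reselect=lambda step, v: print("re-selecting", step, [s.metric for s in v]))
print(ok)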
Abstract:
This paper reports on the outcomes of Phases One and Two of the ALTC Competitive Research and Development Project "Developing Strategies at the Pre-Service Level to Address Critical Teacher Attraction and Retention Issues in Australian Rural, Regional and Remote Schools." This project, funded over two years, aims to strengthen the capacity and credibility of universities to prepare rural, regional and remote educators, similar to the capacity and credibility that has been created in preparing Australia's rural, regional and remote health workers. There is strong recognition of the fundamental importance of quality teaching experiences in rural, regional and remote schools, and throughout this project over 200 pre-service teachers have participated in a curriculum module/object and completed a survey that encourages them to consider teaching in regional Western Australia. The project has mapped the current Western Australian rural, regional and remote pre-service teacher education curriculum and field experience models. This mapping was compared with national information to identify rural, regional and remote education curriculum and/or field experience models used nationally and internationally. In particular, results from Phases One and Two will be presented, reporting on the findings of the first year of the project.