30 results for IT Service Management
Abstract:
This paper applies a policy analysis approach to the question of how to effectively regulate micropollution in a sustainable manner. Micropollution is a complex policy problem characterized by a huge number and diversity of chemical substances, as well as by various entry paths into the aquatic environment. It challenges traditional water quality management by calling for new technologies in wastewater treatment and for behavioral changes in industry, agriculture, and civil society. In light of such challenges, the question arises of how to regulate such a complex phenomenon so as to ensure that water quality is maintained in the future. What can we learn from past experiences in water quality regulation? To answer these questions, policy analysis focuses strongly on the design and choice of policy instruments and on the mix of such measures. In this paper, we review instruments commonly used in past water quality regulation and evaluate their ability to respond, in a sustainable way, to the characteristics of a more recent water quality problem, namely micropollution. On this basis, we develop a new framework that integrates both the problem dimension (i.e., causes and effects of a problem) and the sustainability dimension (e.g., long-term, cross-sectoral, and multi-level) to assess which policy instruments are best suited to regulating micropollution. We conclude that sustainability criteria help to identify an appropriate mix of end-of-pipe and source-directed instruments to reduce aquatic micropollution.
Abstract:
Content Distribution Networks (CDNs) are mandatory components of modern web architectures, with plenty of vendors offering their services. Despite the area's maturity, new paradigms and architecture models are still being developed. Cloud Computing, on the other hand, is a more recent concept that has expanded extremely quickly, with new services being regularly added to cloud management software suites such as OpenStack. The main contribution of this paper is the architecture and development of an open source CDN that can be provisioned in an on-demand, pay-as-you-go model, thereby enabling the CDN-as-a-Service (CDNaaS) paradigm. We describe our experience with the integration of the CDNaaS framework in a cloud environment, as a service for enterprise users. We emphasize the flexibility and elasticity of such a model, with each CDN instance being delivered on demand and associated with personalized caching policies as well as an optimized choice of Points of Presence based on the exact requirements of an enterprise customer. Our development is based on the framework developed in the Mobile Cloud Networking (MCN) EU FP7 project, which offers its enterprise users a common framework to instantiate and control services. CDNaaS is one of the core support components in this project and is tasked with delivering different types of multimedia content to several thousand geographically distributed users. It integrates seamlessly into the MCN service life-cycle and as such enjoys all the benefits of a common design environment, allowing for improved interoperability with the rest of the services within the MCN ecosystem.
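The requirement-driven choice of Points of Presence described in this abstract can be illustrated with a small sketch. The PoP names, costs, coverage regions, and the greedy cost-per-coverage heuristic below are illustrative assumptions, not the CDNaaS framework's actual selection logic:

```python
# Hypothetical sketch: pick PoPs that cover the customer's demand regions
# at the best coverage-per-cost ratio within a deployment budget.
def choose_pops(pops, demand_regions, budget):
    """Greedily select PoPs that cover the most demand per unit cost."""
    chosen, covered = [], set()
    while budget > 0:
        best, best_score = None, 0.0
        for pop in pops:
            if pop["name"] in (p["name"] for p in chosen) or pop["cost"] > budget:
                continue
            gain = len(set(pop["regions"]) & set(demand_regions) - covered)
            score = gain / pop["cost"]
            if score > best_score:
                best, best_score = pop, score
        if best is None:  # nothing affordable adds coverage
            break
        chosen.append(best)
        covered |= set(best["regions"]) & set(demand_regions)
        budget -= best["cost"]
    return [p["name"] for p in chosen]

# Example catalogue (names and numbers are made up for illustration).
pops = [
    {"name": "fra", "cost": 3, "regions": ["eu-west", "eu-central"]},
    {"name": "nyc", "cost": 4, "regions": ["us-east"]},
    {"name": "sin", "cost": 5, "regions": ["apac"]},
]
print(choose_pops(pops, ["eu-central", "us-east"], budget=8))  # → ['fra', 'nyc']
```

A real CDNaaS deployment would of course weigh latency, capacity, and per-customer SLAs rather than a single cost figure, but the on-demand, per-instance nature of the decision is the same.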
Abstract:
Aims: Patient management following elective cranial surgery varies between neurosurgical institutions. Early routine postoperative cranial computed tomography (CT) is often performed while keeping patients sedated and ventilated for several hours. We hypothesize that fast-track management without routine CT scanning, i.e., early extubation within one hour to allow neurological monitoring, is safe and does not increase the rate of return to the operating room (OR) compared with published data. Methods: We prospectively screened 1118 patients who underwent cranial procedures at our department over a period of two years. 420 patients older than 18 years undergoing elective brain surgery with no history of prior cranial surgery were included. Routine neurosurgical practice as performed at our department was not altered for this observational study. Fast-track management was aimed for in all cases; extubated and awake patients were monitored further. CT scanning within 48 hours after surgery was not performed except in cases of unexpected neurological deterioration. This study was registered at ClinicalTrials.gov (NCT01987648). Results: 420 elective craniotomies were performed for 310 supra- and 110 infratentorial lesions. 398 patients (94.8%) could be extubated within 1 hour, 21 (5%) within 6 hours, and 1 patient (0.2%) was extubated 9 hours after surgery. Emergency CT within 48 hours was performed in 36 patients (8.6%; 26 supra- and 10 infratentorial cases) due to unexpected neurological worsening. Of these 36 patients, 5 had to return to the OR (hemorrhage in 3 cases, swelling in 2). The return-to-OR rate across all included cases was 1.2%, which compares favorably with the 1-4% quoted in the current literature. No patient returned to the OR without prior CT imaging. Of the 398 patients extubated within one hour, 2 (0.5%) returned to the OR. Patients who could not be extubated within the first hour had a higher risk of returning to the OR (3 of 22, i.e., 14%).
Overall 30-day mortality was 0.2% (1 patient). Conclusions: Early extubation, with CT imaging performed only in patients with unexpected neurological worsening after elective craniotomy, is safe and does not increase patient mortality or the return-to-OR rate. With this fast-track approach, early postoperative cranial CT for the detection of postoperative complications in the absence of unexpected neurological findings is not justified. Acknowledgments: The authors thank Nicole Söll, study nurse, Department of Neurosurgery, Bern University Hospital, Switzerland, for crucial support in data collection and in managing the database.
Abstract:
Cloud Computing enables the provisioning and distribution of highly scalable services in a reliable, on-demand, and sustainable manner. However, managing enterprise distributed applications in cloud environments under Service Level Agreement (SLA) constraints makes it challenging to maintain optimal resource control. Furthermore, conflicting objectives in the management of cloud infrastructure and distributed applications may lead to SLA violations and to inefficient use of hardware and software resources. This dissertation focuses on how SLAs can be used as an input to the cloud management system (CMS), increasing the efficiency of resource allocation as well as that of infrastructure scaling. First, we present an extended SLA semantic model for modelling complex service dependencies in distributed applications and for enabling automated cloud infrastructure management operations. Second, we describe a multi-objective VM allocation algorithm for optimised resource allocation in infrastructure clouds. Third, we describe a method for discovering relations between the performance indicators of services belonging to distributed applications, and for using these relations to build scaling rules that a CMS can apply for automated management of VMs. Fourth, we introduce two novel VM-scaling algorithms, which optimally scale systems composed of VMs based on given SLA performance constraints. All of the presented research was implemented and tested using enterprise distributed applications.
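The flavour of SLA-constrained VM scaling described in this abstract can be sketched as a simple threshold rule on an SLA-bounded metric. The metric, the thresholds, and the VM bounds below are illustrative assumptions, not the dissertation's actual algorithms:

```python
# Hypothetical sketch: decide the new VM count from how close the measured
# response time is to the SLA bound.
def scaling_decision(avg_response_ms, sla_limit_ms, current_vms,
                     min_vms=1, max_vms=10):
    """Scale out when the SLA bound is threatened, in when there is headroom."""
    utilisation = avg_response_ms / sla_limit_ms
    if utilisation > 0.8 and current_vms < max_vms:   # near SLA violation
        return current_vms + 1
    if utilisation < 0.3 and current_vms > min_vms:   # over-provisioned
        return current_vms - 1
    return current_vms                                # within the comfort band

print(scaling_decision(avg_response_ms=450, sla_limit_ms=500, current_vms=3))  # → 4
```

The point of using the SLA as CMS input is exactly this: the scaling trigger is derived from the contracted performance bound rather than from raw infrastructure metrics such as CPU load.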
Abstract:
The objective of this article is to demonstrate the feasibility of the on-demand creation of cloud-based elastic mobile core networks, along with their lifecycle management. To this end, the article describes the key elements needed to realize the architectural vision of EPC as a Service, an implementation option of the Evolved Packet Core, as specified by 3GPP, that can be deployed in cloud environments. To meet the several challenging requirements associated with implementing EPC over a cloud infrastructure and providing it “as a Service,” this article presents a number of implementation options, each with different characteristics, advantages, and disadvantages. A thorough analysis comparing these options is also presented.
Abstract:
Recently, the telecommunication industry has benefited from infrastructure sharing, one of the most fundamental enablers of cloud computing, leading to the emergence of the Mobile Virtual Network Operator (MVNO) concept. The main aims of this approach are the support of on-demand provisioning and the elasticity of virtualized mobile network components based on data traffic load. To realize this, the virtualized services need to be triggered during operation and management procedures in order to scale an instance up/down or out/in. In this paper we propose an architecture called MOBaaS (Mobility and Bandwidth Availability Prediction as a Service), comprising two algorithms that predict user mobility and network link bandwidth availability. MOBaaS can be implemented in a cloud-based mobile network structure and used as a support service by any other virtualized mobile network service. It can provide prediction information to generate the triggers required for the on-demand deployment, provisioning, and disposal of virtualized network components; this information can also be used for self-adaptation procedures and optimal network function configuration at run time. Through preliminary experiments with a prototype implementation on the OpenStack platform, we evaluated and confirmed the feasibility and effectiveness of the prediction algorithms and the proposed architecture.
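As an illustration of the kind of mobility prediction a service like MOBaaS could expose, the following sketch trains a first-order Markov model over visited network cells and predicts the most likely next cell. The cell identifiers, the trace, and the choice of a first-order model are assumptions for illustration, not the paper's actual prediction algorithms:

```python
from collections import Counter, defaultdict

def train(trace):
    """Count cell-to-cell transitions observed in a movement trace."""
    transitions = defaultdict(Counter)
    for current, nxt in zip(trace, trace[1:]):
        transitions[current][nxt] += 1
    return transitions

def predict_next(transitions, current_cell):
    """Return the most frequent successor of the current cell, or None."""
    followers = transitions.get(current_cell)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Toy trace of visited cells for one UE (made-up identifiers).
trace = ["A", "B", "C", "A", "B", "C", "A", "B", "A"]
model = train(trace)
print(predict_next(model, "B"))  # → C (most frequent successor of B)
```

A prediction like this is exactly what can drive the scale-out/in triggers mentioned above: if many UEs are predicted to enter a cell, the components serving that cell can be provisioned before the load arrives.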
Abstract:
Location prediction has attracted a significant amount of research effort. Being able to predict users’ movement benefits a wide range of communication systems, including location-based services and applications, mobile access control, mobile QoS provisioning, and resource management for mobile computation and storage. In this demo, we present MOBaaS, a cloudified Mobility and Bandwidth prediction service that can be instantiated, deployed, and disposed of on demand. The mobility prediction component of MOBaaS provides location predictions for a single user equipment (UE) or a group of UEs at a future moment. This information can be used for self-adaptation procedures and optimal network function configuration during run-time operation. We demonstrate an example of a real-time mobility prediction service deployment running on the OpenStack platform, and the potential benefits it brings to other invoking services.
Abstract:
Advancements in cloud computing have enabled the proliferation of distributed applications, which require the management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance may suffer, leading to violations of Service Level Agreements (SLAs) and inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means of specifying the quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from instantiating service VMs in the correct order with an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimal amount of computing and network resources for ensuring that the performance requirements of all of his or her applications are met, and in scaling the distributed services appropriately so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, infrastructure providers are interested in optimally provisioning the virtual resources onto the available physical infrastructure so that their operational costs are minimized while the performance of tenants’ applications is maximized.
Motivated by the complexities associated with the management and scaling of distributed applications while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on the dynamic sizing (scaling) of virtual infrastructures composed of virtual machines (VMs) bound to application services. We describe several algorithms for adapting the number of VMs allocated to a distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for the dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces, and we show how these scaling rules can be combined and included in semantic SLAs for controlling the allocation of services. We further provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it. Finally, we present a resource management system based on a genetic algorithm, which allocates virtual resources while optimizing multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
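The genetic-algorithm style of VM allocation described above can be sketched compactly. The single-objective fitness (minimise the number of active hosts, with a penalty for capacity violations), the chromosome encoding, and all parameters below are simplifying assumptions; the thesis itself optimises several criteria simultaneously:

```python
import random

def fitness(assignment, vm_loads, host_capacity):
    """Lower is better: number of active hosts, plus a heavy penalty
    for any host whose assigned load exceeds its capacity."""
    used = {}
    for vm, host in enumerate(assignment):
        used[host] = used.get(host, 0) + vm_loads[vm]
    penalty = sum(max(0, load - host_capacity) for load in used.values())
    return len(used) + 10 * penalty

def evolve(vm_loads, n_hosts, host_capacity, pop_size=30, generations=200):
    """Evolve VM-to-host assignments (one gene per VM) toward low fitness."""
    rng = random.Random(42)  # fixed seed so runs are reproducible
    pop = [[rng.randrange(n_hosts) for _ in vm_loads] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: fitness(a, vm_loads, host_capacity))
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(vm_loads))
            child = p1[:cut] + p2[cut:]           # one-point crossover
            if rng.random() < 0.2:                # mutation: move one VM
                child[rng.randrange(len(child))] = rng.randrange(n_hosts)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda a: fitness(a, vm_loads, host_capacity))

# Toy instance: 5 VMs with given loads, 4 hosts of capacity 6 each.
best = evolve(vm_loads=[2, 3, 4, 1, 2], n_hosts=4, host_capacity=6)
print(best, fitness(best, [2, 3, 4, 1, 2], host_capacity=6))
```

Extending this sketch to the multi-objective case means replacing the scalar fitness with a vector (e.g. host count, energy, expected SLA violation) and a Pareto-based selection step, which is where the genuine complexity of the thesis's allocation problem lies.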