913 results for Monitoring, SLA, JBoss, Middleware, J2EE, Java, Service Level Agreements
Abstract:
The goal of this Master's thesis is to define IT application availability metrics and measurement processes for the case company, to survey the capabilities of the measurement tools available on the market, and to recommend the tools best suited to the company's needs. The thesis first introduces application availability in general, with particular attention to its measurement and management. Because the case company's IT services are outsourced, the analysis focuses mainly on application monitoring, availability management through service level agreements, and measurement processes. In addition, different measurement methods are examined, the company's requirements are defined, and the tools are compared with a focus on functional and technical characteristics. Launching end-to-end application availability measurement is a large project, of which this thesis covers only a part. The numerous alternatives must be weighed carefully to find the greatest benefit for the business.
Abstract:
The service quality of any sector has two major aspects, namely technical and functional. Technical quality can be attained by maintaining the technical specifications decided by the organization. Functional quality refers to the manner in which the service is delivered to the customer, which can be assessed through customer feedback. A field survey was conducted based on the management tool SERVQUAL, with 28 constructs designed under 7 dimensions of service quality. Stratified sampling techniques were used to obtain 336 valid responses, and the gap scores between expectations and perceptions were analyzed using statistical techniques to identify the weakest dimension. To assess the technical aspect of availability, six months of live outage data from base transceiver stations were collected. Statistical and exploratory techniques were used to model the network performance. The failure patterns were modeled as competing risk models, and the probability distributions of service outages and restorations were parameterized. Since the availability of a network is a function of the reliability and maintainability of its elements, any service provider who wishes to keep up service level agreements on availability should be aware of the variability of these elements and the effects of their interactions. The availability variations were studied by designing a discrete-time event simulation model with probabilistic input parameters. The probability distribution parameters derived from the live data analysis were used to design experiments that define the availability domain of the network under consideration. The availability domain can be used as a reference for planning and implementing maintenance activities. A new metric is proposed that incorporates a consistency index along with key service parameters and can be used to compare the performance of different service providers.
The developed tool can be used for reliability analysis of mobile communication systems and assumes greater significance in the wake of the mobile number portability facility. It also makes possible a relative measure of the effectiveness of different service providers.
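The relationship this abstract relies on, availability as a function of reliability (time between failures) and maintainability (restoration time), can be sketched with a small Monte Carlo simulation in the spirit of the discrete event model described above. The exponential distributions and parameter values here are illustrative assumptions, not the thesis's fitted data:

```python
import random

def simulate_availability(mtbf_hours, mttr_hours, horizon_hours, seed=0):
    """Monte Carlo estimate of availability: alternate exponentially
    distributed up-times (mean = MTBF) and repair times (mean = MTTR)."""
    rng = random.Random(seed)
    t, up_time = 0.0, 0.0
    while t < horizon_hours:
        up = rng.expovariate(1.0 / mtbf_hours)       # time to next failure
        up_time += min(up, horizon_hours - t)        # clip at the horizon
        t += up
        if t >= horizon_hours:
            break
        t += rng.expovariate(1.0 / mttr_hours)       # restoration time
    return up_time / horizon_hours

# The estimate approaches the steady-state value MTBF / (MTBF + MTTR),
# here 500 / 504, i.e. roughly 0.992.
print(simulate_availability(mtbf_hours=500, mttr_hours=4,
                            horizon_hours=1_000_000))
```

Running many such simulations over varied MTBF/MTTR inputs is one way to map out an "availability domain" of the kind the abstract uses for maintenance planning.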
Abstract:
The Grid is a large-scale computer system that is capable of coordinating resources that are not subject to centralised control, whilst using standard, open, general-purpose protocols and interfaces, and delivering non-trivial qualities of service. In this chapter, we argue that Grid applications very strongly suggest the use of agent-based computing, and we review key uses of agent technologies in Grids: user agents, able to customize and personalise data; agent communication languages offering a generic and portable communication medium; and negotiation allowing multiple distributed entities to reach service level agreements. In the second part of the chapter, we focus on Grid service discovery, which we have identified as a prime candidate for use of agent technologies: we show that Grid-services need to be located via personalised, semantic-rich discovery processes, which must rely on the storage of arbitrary metadata about services that originates from both service providers and service users. We present UDDI-MT, an extension to the standard UDDI service directory approach that supports the storage of such metadata via a tunnelling technique that ties the metadata store to the original UDDI directory. The outcome is a flexible service registry which is compatible with existing standards and also provides metadata-enhanced service discovery.
Abstract:
The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources that they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they actually experience; administrators, on the other hand, will be able to maximize their total revenue by utilizing application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment.
Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Network and Support Vector Machine, to accurately model the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these modeling tools. Third, we presented an approach to optimal VM sizing by employing the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm which maximizes the SLA-generated revenue for a data center.
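The revenue-driven allocation idea can be illustrated with a minimal greedy sketch: hand out resource units to whichever VM's SLA revenue curve gains the most from the next unit. The revenue functions and the greedy strategy below are hypothetical illustrations, not the thesis's actual algorithm:

```python
def allocate(capacity, revenue_fns, step=1):
    """Greedily distribute `capacity` units of one resource among VMs.
    revenue_fns: one function per VM, mapping allocation -> SLA revenue."""
    alloc = [0] * len(revenue_fns)
    for _ in range(0, capacity, step):
        # Marginal revenue of giving each VM `step` more units.
        gains = [f(a + step) - f(a) for f, a in zip(revenue_fns, alloc)]
        best = max(range(len(gains)), key=gains.__getitem__)
        if gains[best] <= 0:
            break  # no VM earns more revenue from extra resources
        alloc[best] += step
    return alloc

# Two VMs with diminishing-returns (concave) revenue curves; the first
# is more lucrative, so it should receive the larger share.
r1 = lambda x: 10 * (1 - 0.8 ** x)   # hypothetical SLA revenue curves
r2 = lambda x: 6 * (1 - 0.7 ** x)
print(allocate(10, [r1, r2]))
```

For concave revenue curves like these, the unit-by-unit greedy choice yields an optimal split, which is why diminishing-returns SLA pricing pairs naturally with this style of allocator.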
Abstract:
The adoption of Augmented Reality (AR) technologies can make the provision of field services for industrial equipment more effective. In these situations, the cost of deploying skilled technicians to geographically dispersed locations must be accurately traded off against the risks of not respecting the service level agreements with customers. This paper, through the case study of a leading OEM in the production printing industry, presents the challenges that have to be faced in order to favour the adoption of a particular kind of AR called Mobile Collaborative Augmented Reality (MCAR). In particular, this study uses both qualitative and quantitative research. First, a demonstration of how MCAR can support field service was set up in order to gather information about the user experience of the people involved. Then, the entire field force of Océ Italia – Canon Group was surveyed in order to investigate quantitatively the technicians' perceptions of the usefulness and ease of use of MCAR, as well as their intentions to use this technology.
Abstract:
Telecommunications service providers and operators face exponential growth in bandwidth requirements. With the evolution and mass adoption of Internet and intranet services by public and private users, relying on the TCP protocol's adaptation to link quality is no longer enough: traffic differentiation has become a necessity. Quality-of-service methodologies within Internet service providers are the means of guaranteeing a level of service appropriate to each type of traffic. These methodologies are supported by the IP MPLS networks of the various telecommunications operators, which transport the services of their business and residential customers: Internet access, public data and voice services, and virtual private networks. Application portals are the direct customer interface for defining Service Level Agreements (SLAs) and associating them with Service Level Specifications (SLSs), which are later related to metrics suited to the quality of service agreed with the customer in the design of services on an IP MultiProtocol Label Switching network. The proposal is to create a methodology for mapping customers' service needs into SLAs and recording them in a database, clearly separating the quality of service seen from the operator's perspective into transport network architecture, service architecture, and monitoring architecture. These data are mapped into parameters and implementation specifications of the services that support the operator's business, with a view to creating an end-to-end workflow.
In parallel, the commercially available services are defined, together with the set of services supported by the IP MPLS network and technology, each with appropriate Quality of Service Assurance parameterization; a network architecture is created to support basic transport between the various access aggregation devices across the backbone; and a support architecture is defined for each type of service, independent of the transport architecture. In this work, some of the QoS architectures studied for IP MPLS are implemented in simulators made available by the open-source community, and the advantages and disadvantages of each are analysed. All requirements are properly accounted for, anticipating growth and performance, establishing rules for bandwidth distribution and performance analysis, and creating scalable networks with optimistic growth estimates. The services are designed to adapt to evolving application needs, growth in the number of users, and the evolution of the service itself.
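The SLA-to-SLS mapping described in the abstract above can be pictured as a small lookup from a commercial service class to concrete QoS parameters. The class names, latency targets and the specific DSCP/MPLS EXP marks chosen below are illustrative assumptions (EF and AF41 are standard DiffServ code points), not the operator's actual design:

```python
# Hypothetical SLA-class -> QoS-parameter mapping, in the spirit of the
# SLA/SLS separation described above. Values are illustrative DiffServ
# marks: EF (DSCP 46) for voice, AF41 (DSCP 34) for business traffic.
QOS_PROFILES = {
    "voice":       {"dscp": 46, "mpls_exp": 5, "max_latency_ms": 20},
    "business":    {"dscp": 34, "mpls_exp": 4, "max_latency_ms": 100},
    "best_effort": {"dscp": 0,  "mpls_exp": 0, "max_latency_ms": None},
}

def sls_for(sla_class, bandwidth_mbps):
    """Map a commercial SLA class to a Service Level Specification record."""
    profile = QOS_PROFILES[sla_class]
    return {"class": sla_class, "bandwidth_mbps": bandwidth_mbps, **profile}

print(sls_for("voice", 10))
```

Keeping such records in a database, as the abstract proposes, lets the monitoring architecture compare measured metrics against the agreed SLS values per customer.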
Abstract:
Dissertation presented to the Instituto Superior de Administração e Contabilidade do Porto for the degree of Master in Logistics. Supervised by Professora Doutora Maria Clara Rodrigues Bento Vaz Fernandes.
Abstract:
The goal of this thesis is to determine the benefits that a service level agreement and its management bring to supplier management. The research problem is approached by first reviewing supplier management briefly and then, more extensively, the existing theory on service level agreements and their management. On this theoretical basis, service level agreements and their management are examined in practice in the light of a case company, in order to reach as realistic a conclusion as possible. Outsourcing of services will increase in the future, and companies' success in the market will therefore depend increasingly on their ability to manage their suppliers. Service level agreements and the processes around them are useful tools for monitoring and developing supplier performance and for steering the supplier relationship towards a partnership. This requires, however, that the trading partners internalize the benefits achievable through service level agreements and commit to the entire service level management process.
Abstract:
IT outsourcing refers to the way companies focus on their core competencies and buy supporting functions from other companies specialized in that area. A service is the total outcome of numerous activities by employees and other resources aimed at providing solutions to customers' problems. Outsourcing and service business have their own unique characteristics. Service level agreements quantify the minimum acceptable service to the user. Service quality has to be objectively quantified so that its achievement or non-achievement can be monitored. Offshoring usually refers to the transfer of tasks to low-cost countries. Offshoring presents many challenges that require special attention and need to be assessed thoroughly. IT infrastructure management refers to the installation and basic usability assistance of operating systems, network and server tools, and utilities. ITIL defines the industry's best practices for organizing IT processes. This thesis analysed a server operations service and the customers' perception of the quality of daily operations. The agreed workflows and processes should be followed more closely. The service provider's processes are thoroughly defined, but both the customer and the service provider may disobey them. The service provider should review the workflows concerning customer-facing functions. Customer-facing functions require persistent skill development, as they communicate quality to the customer. The service provider needs to provide better organized communication and knowledge exchange methods between specialists in different geographical locations.
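The point that an SLA quantifies the minimum acceptable service, and that its achievement must be objectively monitorable, can be shown with a minimal availability-compliance check. The target value and outage figures below are hypothetical examples:

```python
from dataclasses import dataclass

@dataclass
class SLA:
    name: str
    min_availability: float  # e.g. 0.995 means 99.5 % uptime

def compliance(sla, period_minutes, outage_minutes):
    """Return (achieved availability, met?) for one reporting period."""
    achieved = (period_minutes - outage_minutes) / period_minutes
    return achieved, achieved >= sla.min_availability

# Hypothetical monthly check: a 30-day period with 180 minutes of outage.
sla = SLA("server-operations", min_availability=0.995)
avail, met = compliance(sla, period_minutes=30 * 24 * 60, outage_minutes=180)
print(f"{avail:.4%} -> {'met' if met else 'breached'}")
```

Running such a check per reporting period gives both parties the objective achievement/non-achievement record the abstract calls for.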
Abstract:
FTTH access networks can be built and owned not only by traditional telecom operators but also by other actors, such as power companies, state- and municipality-owned companies, and private cooperatives. Competition between market actors and different technologies, however, easily leads to low connection densities and fragmented small networks. This makes profitable network construction difficult and service provision inefficient. The situation can be improved with open access, which clarifies the roles of the parties in the value chain. The goal of this Master's thesis was to describe the role of a communications operator in the Finnish telecom market and to construct for it a financially viable business model for open-access FTTH networks built by another party. The work was carried out using a constructive research approach, and its end result was a suitable business model. The communications operator's two-sided platform connects service providers nationwide with the end customers connected to regional access networks. The communications operator's business model enables virtual wholesale products differentiated by capacity, quality of service, and reliability, and thereby an opportunity for new revenue.
Abstract:
Traditional resource management has had as its main objective the optimization of throughput, based on parameters such as CPU, memory, and network bandwidth. With the appearance of Grid markets, new variables that determine economic expenditure, benefit and opportunity must be taken into account. The Self-organizing ICT Resource Management (SORMA) project aims at allowing resource owners and consumers to exploit market mechanisms to sell and buy resources across the Grid. SORMA's motivation is to achieve efficient resource utilization by maximizing revenue for resource providers and minimizing the cost of resource consumption within a market environment. An overriding factor in Grid markets is the need to ensure that the desired quality of service levels meet the expectations of market participants. This paper explains the proposed use of an economically enhanced resource manager (EERM) for resource provisioning based on economic models. In particular, this paper describes techniques used by the EERM to support revenue maximization across multiple service level agreements and provides an application scenario to demonstrate its usefulness and effectiveness. Copyright © 2008 John Wiley & Sons, Ltd.
Abstract:
MyGrid is an e-Science Grid project that aims to help biologists and bioinformaticians to perform workflow-based in silico experiments, and to help them automate the management of such workflows through personalisation, notification of change and publication of experiments. In this paper, we describe the architecture of myGrid and how it will be used by the scientist. We then show how myGrid can benefit from agent technologies. We have identified three key uses of agent technologies in myGrid: user agents, able to customize and personalise data; agent communication languages, offering a generic and portable communication medium; and negotiation, allowing multiple distributed entities to reach service level agreements.
Abstract:
IT organizations face problems in IT contracts (for products and services) such as: inadequate or insufficient definition of the contracted products/services; undefined responsibilities; inadequate or non-existent SLAs (Service Level Agreements); absence or inadequacy of penalty clauses and of clear, objective methods for measuring the delivered products/services; and overlaps or gaps between contracts from different suppliers.
Abstract:
The Mobile Cloud Networking project develops, among others, several virtualized services and applications, in particular: (1) IP Multimedia Subsystem as a Service, which makes it possible to deploy a virtualized, on-demand instance of the IP Multimedia Subsystem platform; (2) Digital Signage Service as a Service, which is based on a re-designed Digital Signage Service architecture adopting cloud computing principles; and (3) Information Centric Networking/Content Delivery Network as a Service, which is used for distributing, caching and migrating content from other services. Possible designs for these virtualized services and applications have been identified and are being implemented. In particular, the architectures of the mentioned services were specified, adopting cloud computing principles such as infrastructure sharing, elasticity, on-demand provisioning and pay-as-you-go. The benefits of the Reactive Programming paradigm are presented in the context of interactive cloudified Digital Signage services in a Mobile Cloud Platform, as well as the benefit of interworking between different Mobile Cloud Networking services, such as the Digital Signage Service and the Content Delivery Network Service, for better performance of Video on Demand content delivery. Finally, the management of Service Level Agreements and the support for rating, charging and billing have also been considered and defined.