929 results for Service level agreement


Relevance: 90.00%

Publisher:

Abstract:

A well-known drawback of IP networks is that they cannot guarantee a specific quality of service for transmitted packets. The following two techniques are considered the most promising for providing quality of service: Differentiated Services (DiffServ) and QoS routing. DiffServ is a fairly recent quality-of-service mechanism for the Internet defined by the IETF. It offers scalable service differentiation without per-hop signalling or per-flow state management, and it is a good example of decentralised network design. The goal of this service-level mechanism is to simplify the design of communication systems: a network node can be built from a small set of well-defined building blocks. QoS routing is a routing mechanism in which traffic paths are determined on the basis of the network's available resources. This thesis investigates a new QoS routing method called Simple Multipath Routing. The aim of the work is to design a QoS controller for DiffServ; the controller proposed here is an attempt to combine DiffServ with QoS routing mechanisms. The experimental part of the work focuses in particular on QoS routing algorithms.
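The thesis text itself is not reproduced here, but the QoS-routing objective it builds on can be sketched. Below is a minimal, illustrative widest-path (maximum-bottleneck-bandwidth) route computation, a classic QoS-routing criterion; the graph representation and function name are assumptions of this sketch, not taken from the thesis.

```python
import heapq

def widest_path(graph, src, dst):
    """Find the path from src to dst that maximizes the bottleneck
    (minimum residual bandwidth) over its links, using a
    Dijkstra-like search with a max-heap."""
    best = {src: float("inf")}   # widest bottleneck known for each node
    prev = {}
    heap = [(-float("inf"), src)]  # max-heap via negated widths
    while heap:
        width, u = heapq.heappop(heap)
        width = -width
        if u == dst:
            break
        if width < best.get(u, 0):
            continue  # stale heap entry
        for v, bw in graph.get(u, {}).items():
            cand = min(width, bw)  # bottleneck along the extended path
            if cand > best.get(v, 0):
                best[v] = cand
                prev[v] = u
                heapq.heappush(heap, (-cand, v))
    if dst not in best:
        return None, 0
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], best[dst]
```

For example, with links a-b (10), a-c (5), b-d (4), c-d (5), the widest route from a to d is a-c-d with bottleneck 5, even though a-b is the individually widest link.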

Relevance: 90.00%

Publisher:

Abstract:

Demand for new paper machines has declined, and the importance of after-sales services, such as maintenance and spare-part sales, in the paper machine business has recently grown further. New types of services are continuously being developed to increase competitive advantage. One example is a contract-based warehousing service in which parts remain in the seller's warehouse until the customer takes them into use. The goal of this Master's thesis is to build a model for warehousing cost accounting and to use it to calculate the costs of the warehousing service. According to the current view, a traditional supply chain with its many warehouses is no longer cost-efficient. A growing number of companies in trade and industry have begun to apply Vendor Managed Inventory (VMI) in their supply chains. Inventories are then centralised, information flows quickly between the stages of the supply chain, and demand can be met with a shorter delay thanks to its improved predictability. The result of the work is an activity-based costing model that can also be used to support pricing decisions. The thesis presents the application of the model to different cases and proposes further actions.

Relevance: 90.00%

Publisher:

Abstract:

As its primary objective, this thesis examines Finnair Technical Procurement's service quality together with its underlying process. As an internal unit, Technical Procurement serves as a link between external suppliers and internal customers. It is argued that external service quality requires a certain quality level within the organization. At the same time, the aircraft maintenance business is subject to economic constraints. Therefore, a methodology was developed, based on a modified House of Quality, that assists management in analyzing and evaluating Technical Procurement's service level and the connected process steps. It could be shown that qualitative and quantitative objectives do not per se exclude each other.

Relevance: 90.00%

Publisher:

Abstract:

Customer satisfaction should be the main focus for all parts of a business. The supply chain behind the business usually plays a key role in pursuing this focus, especially in the repair service business. When considering the materials needed to repair equipment under service contracts, the time aspect of quality is critical. Do late deliveries from suppliers affect the service performance of repairs when the distribution center of a centralized purchasing unit acts as a buffer between suppliers and the repair service business? And if so, how should improvement efforts be prioritized? These are the two main questions this thesis focuses on. Correlation and linear regression were tested between the service levels of suppliers and the distribution center: the percentage of on-time supplier deliveries was compared with the outbound delivery service level. A statistically significant correlation was found between the success of inbound and outbound operations. The other main question, improvement prioritization, was answered by creating a supplier classification based on material availability and, in addition, by developing a decision process for analyzing the most critical suppliers. This was built on the basis of previous supplier and material classification methods.
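As an illustration of the kind of analysis the abstract describes, here is a minimal, self-contained sketch of computing the correlation and fitting a regression line between inbound on-time delivery and outbound service level. The monthly figures are invented for illustration; the thesis's actual data and tooling are not reproduced.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)

def linreg(xs, ys):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical monthly figures: inbound on-time delivery % vs.
# outbound delivery service level % of the distribution center.
inbound  = [91.0, 93.5, 88.0, 95.0, 90.5, 94.0]
outbound = [95.5, 96.8, 94.0, 97.6, 95.2, 97.1]
r = pearson_r(inbound, outbound)
slope, intercept = linreg(inbound, outbound)
```

A high r here would point to the same conclusion the thesis reaches empirically: inbound delivery performance propagates to outbound service levels despite the buffering distribution center.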

Relevance: 90.00%

Publisher:

Abstract:

The service quality of any sector has two major aspects, namely technical and functional. Technical quality can be attained by maintaining the technical specifications decided by the organization. Functional quality refers to the manner in which the service is delivered to the customer, which can be assessed through customer feedback. A field survey was conducted based on the management tool SERVQUAL, by designing 28 constructs under 7 dimensions of service quality. Stratified sampling techniques were used to obtain 336 valid responses, and the gap scores between expectations and perceptions were analyzed using statistical techniques to identify the weakest dimension. To assess the technical aspect of availability, six months of live outage data from base transceiver stations were collected. Statistical and exploratory techniques were used to model the network performance. The failure patterns were modeled with competing-risk models, and the probability distributions of service outages and restorations were parameterized. Since the availability of a network is a function of the reliability and maintainability of the network elements, any service provider who wishes to keep up service level agreements on availability should be aware of the variability of these elements and the effects of their interactions. The availability variations were studied by designing a discrete-time event simulation model with probabilistic input parameters. The probability distribution parameters derived from the live data analysis were used to design experiments that define the availability domain of the network under consideration. The availability domain can be used as a reference for planning and implementing maintenance activities. A new metric is proposed that incorporates a consistency index along with key service parameters and can be used to compare the performance of different service providers.
The developed tool can be used for reliability analysis of mobile communication systems and assumes greater significance in the wake of mobile number portability. It also makes possible a relative measure of the effectiveness of different service providers.
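The discrete-event availability simulation the abstract mentions can be hinted at with a much-reduced sketch: alternating exponentially distributed up- and down-times whose means play the roles of MTBF and MTTR. The parameter values and the exponential assumption are illustrative choices of this sketch, not the thesis's fitted distributions.

```python
import random

def simulate_availability(mtbf, mttr, cycles, seed=42):
    """Monte-Carlo sketch of network-element availability: alternate
    exponentially distributed up-times (mean mtbf) and down-times
    (mean mttr), then report uptime / total time."""
    rng = random.Random(seed)
    up = down = 0.0
    for _ in range(cycles):
        up += rng.expovariate(1.0 / mtbf)    # time to failure
        down += rng.expovariate(1.0 / mttr)  # time to restore
    return up / (up + down)

a = simulate_availability(mtbf=1000.0, mttr=10.0, cycles=2000)
# steady-state theory: A = MTBF / (MTBF + MTTR), here about 0.990
```

With many cycles the simulated value converges to the steady-state formula; the value of such a simulation lies in replacing the exponential assumption with distributions fitted to live outage data, as the thesis does.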

Relevance: 90.00%

Publisher:

Abstract:

The Grid is a large-scale computer system that is capable of coordinating resources that are not subject to centralised control, whilst using standard, open, general-purpose protocols and interfaces, and delivering non-trivial qualities of service. In this chapter, we argue that Grid applications very strongly suggest the use of agent-based computing, and we review key uses of agent technologies in Grids: user agents, able to customize and personalise data; agent communication languages offering a generic and portable communication medium; and negotiation allowing multiple distributed entities to reach service level agreements. In the second part of the chapter, we focus on Grid service discovery, which we have identified as a prime candidate for use of agent technologies: we show that Grid-services need to be located via personalised, semantic-rich discovery processes, which must rely on the storage of arbitrary metadata about services that originates from both service providers and service users. We present UDDI-MT, an extension to the standard UDDI service directory approach that supports the storage of such metadata via a tunnelling technique that ties the metadata store to the original UDDI directory. The outcome is a flexible service registry which is compatible with existing standards and also provides metadata-enhanced service discovery.

Relevance: 90.00%

Publisher:

Abstract:

This article outlines some of the issues involved in developing partnerships between service users, practitioners and researchers. It discusses these through some experience in Oslo as part of a national level agreement (HUSK) to improve social services in Norway through research and knowledge development. It begins with a review of the main concepts and debates involved in developing collaborative partnerships for practice-based research, particularly in the social services arena. The HUSK program is then described. The article then traces some specific developments and challenges in negotiating partnership relations as discussed by program participants (users, practitioners and researchers) in a series of workshops designed to elicit the issues directly from their experience.

Relevance: 90.00%

Publisher:

Abstract:

This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources to combine. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, discovery is performed at three levels: content, service, and agent. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup used to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as screen scraping in the literature, is the content discovery of the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level.
This lightweight description framework allows service discovery rules to be defined that identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after an analysis of service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow behaviours to be specified for crawling and executing services, resulting in the fulfilment of a high-level goal. Agent rules are plans that introspect the discovered data and services from the web, together with the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific resources. By defining plans, an agent can be configured to target specific resources. The discovery framework has been evaluated on different scenarios, each covering different levels of the framework. The Contenidos a la Carta project deals with mashing up news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web. Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered in component repositories from the web.
The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web. The main contribution of the thesis is the unified discovery framework, which allows agents to be configured to perform automated tasks. In addition, a scraping ontology has been defined for the construction of mappings for scraping web resources, and a novel first-order-logic rule-induction algorithm is defined for the automated construction and maintenance of these mappings from the visual information in web resources. Additionally, a common unified model for the discovery of services is defined, which allows service descriptions to be shared. Future work comprises the further extension of service probing, resource ranking, extensions of the scraping ontology and the agent model, and constructing a base of discovery rules.

Relevance: 90.00%

Publisher:

Abstract:

Service compositions put together loosely-coupled component services to perform more complex, higher level, or cross-organizational tasks in a platform-independent manner. Quality-of-Service (QoS) properties, such as execution time, availability, or cost, are critical for their usability, and permissible boundaries for their values are defined in Service Level Agreements (SLAs). We propose a method whereby constraints that model SLA conformance and violation are derived at any given point of the execution of a service composition. These constraints are generated using the structure of the composition and properties of the component services, which can be either known or empirically measured. Violation of these constraints means that the corresponding scenario is unfeasible, while satisfaction gives values for the constrained variables (start / end times for activities, or number of loop iterations) which make the scenario possible. These results can be used to perform optimized service matching or trigger preventive adaptation or healing.
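A toy version of the constraint idea, restricted to the simplest case of a purely sequential composition, might look as follows. The function names and the three-way classification are assumptions of this sketch, not the paper's formalism, which derives constraints for arbitrary composition structures at any execution point.

```python
def compose_sequential(bounds):
    """Derive the end-to-end execution-time interval of a sequential
    composition from per-service [min, max] execution-time bounds."""
    lo = sum(b[0] for b in bounds)
    hi = sum(b[1] for b in bounds)
    return lo, hi

def sla_status(bounds, deadline):
    """Classify a composition against an SLA deadline:
    'safe'     -- even the worst case meets the deadline,
    'violated' -- even the best case misses it,
    'at-risk'  -- feasible, depends on actual execution times."""
    lo, hi = compose_sequential(bounds)
    if hi <= deadline:
        return "safe"
    if lo > deadline:
        return "violated"
    return "at-risk"
```

For three services with bounds (2, 5), (1, 3), and (4, 6), the composition completes in 7 to 14 time units, so a 15-unit SLA is safe, a 10-unit SLA is at risk, and a 6-unit SLA is provably violated before execution even starts, which is exactly the kind of early verdict that enables preventive adaptation.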

Relevance: 90.00%

Publisher:

Abstract:

Knowledge about the quality characteristics (QoS) of service compositions is crucial for determining their usability and economic value. Service quality is usually regulated using Service Level Agreements (SLAs). While end-to-end SLAs are well suited for request-reply interactions, more complex, decentralized, multiparticipant compositions (service choreographies) typically involve multiple message exchanges between stateful parties, and the corresponding SLAs thus encompass several cooperating parties with interdependent QoS. The usual approaches to determining QoS ranges structurally (which are by construction easily composable) are not applicable in this scenario. Additionally, the intervening SLAs may depend on the exchanged data. We present an approach to data-aware QoS assurance in choreographies through the automatic derivation of composable QoS models from participant descriptions. Such models are based on a message typing system with size constraints and are derived using abstract interpretation. The models obtained have multiple uses, including run-time prediction, adaptive participant selection, and design-time compliance checking. We also present an experimental evaluation and discuss the benefits of the proposed approach.

Relevance: 90.00%

Publisher:

Abstract:

Quality of service (QoS) can be a critical element for achieving the business goals of a service provider, for the acceptance of a service by the user, or for guaranteeing service characteristics in a composition of services, where a service is defined as either a software or a software-support (i.e., infrastructural) service available on any type of network or electronic channel. The goal of this article is to compare the approaches to QoS description in the literature, covering several models and metamodels. We consider a large spectrum of models and metamodels for describing service quality, ranging from ontological approaches defining quality measures, metrics, and dimensions, to metamodels enabling the specification of quality-based service requirements and capabilities as well as of SLAs (Service Level Agreements) and SLA templates for service provisioning. Our survey is performed by inspecting the characteristics of the available approaches to reveal which are consolidated and which are specific to given aspects, and to analyze where the need for further research and investigation lies. The approaches illustrated here were selected through a systematic review of conference proceedings and journals spanning various research areas in computer science and engineering, including distributed, information, and telecommunication systems, networks and security, and service-oriented and grid computing.

Relevance: 90.00%

Publisher:

Abstract:

Multi-national manufacturing companies are often faced with very difficult decisions regarding where and how to cost effectively manufacture products in a global setting. Clearly, they must utilize efficient and responsive manufacturing strategies to reach low cost solutions, but they must also consider the impact of manufacturing and transportation solutions upon their ability to support sales. One important sales consideration is determining how much work in process, in-transit stock, and finished goods to have on hand to support sales at a desired service level. This paper addresses this important consideration through a comprehensive scenario-based simulation approach, including sensitivity analysis on key study parameters. Results indicate that the inventory needs vary considerably for different manufacturing and delivery methods in ways that may not be obvious when using common evaluative tools.
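The stock needed "to support sales at a desired service level", mentioned above, is commonly computed with the textbook safety-stock formula under a normal-demand assumption. The sketch below illustrates that standard formula only; it is not the paper's scenario-based simulation model, and the parameter values are invented.

```python
from statistics import NormalDist

def safety_stock(service_level, demand_std, lead_time):
    """Safety stock needed to hit a cycle service level, assuming
    normally distributed per-period demand, independent across periods."""
    z = NormalDist().inv_cdf(service_level)  # service-level factor
    return z * demand_std * lead_time ** 0.5

def reorder_point(mean_demand, service_level, demand_std, lead_time):
    """Reorder point = expected lead-time demand + safety stock."""
    return mean_demand * lead_time + safety_stock(
        service_level, demand_std, lead_time)
```

For example, with mean demand 50 units/week, demand standard deviation 10, and a 4-week delivery lead time, a 95% service level requires roughly 33 units of safety stock on top of the 200 units of expected lead-time demand. Longer transport lead times raise the safety stock with the square root of the lead time, which is why inventory needs vary so strongly across manufacturing and delivery methods.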

Relevance: 90.00%

Publisher:

Abstract:

The adoption of Augmented Reality (AR) technologies can make the provision of field services for industrial equipment more effective. In these situations, the cost of deploying skilled technicians to geographically dispersed locations must be accurately traded off against the risk of not respecting the service level agreements with customers. This paper, through the case study of a leading OEM in the production printing industry, presents the challenges that have to be faced in order to favour the adoption of a particular kind of AR named Mobile Collaborative Augmented Reality (MCAR). In particular, this study uses both qualitative and quantitative research. Firstly, a demonstration of how MCAR can support field service was set up in order to gather information on the user experience of the people involved. Then, the entire field force of Océ Italia – Canon Group was surveyed in order to investigate quantitatively the technicians' perceptions of the usefulness and ease of use of MCAR, as well as their intention to use this technology.

Relevance: 80.00%

Publisher:

Abstract:

Telecommunications service providers and operators face exponential growth in bandwidth requirements. With the evolution and mass adoption of Internet and intranet services by public and private organisations, relying on TCP adapting to the quality of the link is no longer enough; traffic differentiation has become a necessity. Methodologies that ensure quality of service at Internet service providers are the way to guarantee a quality of service appropriate to each traffic type. These methodologies are supported by the IP MPLS networks of the various telecommunications operators, which carry the services of their business and residential customers: Internet access, public data and voice services, and virtual private networks. Application portals are the direct interface with the customer for defining Service Level Agreements and associating them with Service Level Specifications, which are later related to the definition of metrics matching the quality of service agreed with the customer in the design of the services of an IP MultiProtocol Label Switching (MPLS) network. The proposal is to create a methodology for mapping customers' service needs into SLAs and recording them in a database, clearly separating quality of service as seen by the operator into transport network architecture, service architecture, and monitoring architecture. These data are mapped into parameters and implementation specifications for the services supporting the operator's business, with a view to creating an end-to-end workflow.
In parallel, the services to be offered commercially are defined, together with the set of services supported by the IP MPLS network and technology, each with its appropriate Quality of Service Assurance parameterisation; a transport network architecture is created to interconnect the various access-aggregation devices through the backbone; and a support architecture is defined for each type of service, independent of the transport architecture. In this work, some of the QoS architectures studied for IP MPLS are implemented in simulators made available by the open-source community, and the advantages and disadvantages of each are analysed. All requirements are properly addressed, anticipating growth and performance, establishing rules for bandwidth distribution and performance analysis, and creating scalable networks with optimistic growth estimates. The services are designed to adapt to evolving application needs, to growth in the number of users, and to the evolution of the service itself.

Relevance: 80.00%

Publisher:

Abstract:

Cloud SLAs compensate customers with credits when average availability drops below certain levels. This is too inflexible, because consumers lose non-measurable amounts of performance and are only compensated later, in subsequent charging cycles. We propose to schedule virtual machines (VMs) driven by range-based, non-linear reductions of utility that differ across classes of users and across ranges of resource allocation: partial utility. This customer-defined metric allows providers to transfer resources between VMs in meaningful and economically efficient ways. We define a comprehensive cost model incorporating the partial utility given by clients to a certain level of degradation when VMs are allocated in overcommitted environments (public, private, and community clouds). CloudSim was extended to support our scheduling model. Several simulation scenarios with synthetic and real workloads are presented, using datacenters of different dimensions regarding the number of servers and computational capacity. We show that partial-utility-driven scheduling allows more VMs to be allocated. It brings benefits to providers regarding revenue and resource utilization, allowing more revenue per resource allocated and scaling well with the size of the datacenter when compared with a utility-oblivious redistribution of resources. Regarding clients, their workloads' execution time is also improved by incorporating an SLA-based redistribution of their VMs' computational power.
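One way to picture partial-utility-driven redistribution is a greedy allocator that spends spare capacity where the client-declared utility gains most. This is a simplified sketch of the general idea, with invented data types and utility functions; it is not the CloudSim extension described in the abstract.

```python
def allocate(capacity, vms):
    """Greedy sketch of partial-utility-driven allocation: every VM gets
    its minimum share, then remaining capacity goes, one unit at a time,
    to the VM whose utility function gains most from one more unit.

    Each VM is (name, min_alloc, max_alloc, utility), where utility maps
    an allocation to the client-declared partial utility value.
    """
    alloc = {name: lo for name, lo, hi, _ in vms}
    left = capacity - sum(alloc.values())
    if left < 0:
        raise ValueError("cannot satisfy minimum allocations")
    while left > 0:
        # marginal gain of one more unit for each non-saturated VM
        gains = [(u(alloc[n] + 1) - u(alloc[n]), n)
                 for n, _lo, hi, u in vms if alloc[n] < hi]
        if not gains:
            break  # all VMs saturated
        gain, name = max(gains)
        if gain <= 0:
            break  # no VM values further resources
        alloc[name] += 1
        left -= 1
    return alloc
```

With a "gold" VM valuing each unit at 10 and a "bronze" VM at 3, a capacity of 10 units first saturates gold at its maximum before bronze receives the rest, mirroring how class-dependent utility steers degradation toward the clients who mind it least.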