932 results for Service Level


Relevance:

60.00%

Publisher:

Abstract:

The importance of service level management (SLM) for enterprise applications is growing as IT-supported processes become increasingly critical to the success of individual businesses. Traditionally, an effective SLM is implemented by establishing monitoring processes in hierarchical management environments that support an administrator in the necessary reconfiguration of systems. These hierarchical approaches, however, are applicable only to a very limited extent to today's highly dynamic software architectures. One example is service-oriented architectures (SOA), in which business functionality is modelled through the interplay of individual, mutually independent services on the basis of descriptive workflow definitions, resulting in high runtime dynamics across the entire architecture. For SLM, the decentralised structure of a SOA, with different administrative responsibilities for individual subsystems, is particularly problematic, since corrective interventions are severely restricted both by the encapsulation of each service's implementation and by the absence of a central controlling authority. This thesis defines the architecture of an SLM system for SOA environments in which autonomous management components cooperate to meet overarching service level objectives: self-management technologies are first used to automate service level management at the level of individual services. The autonomous management components of these services can then pursue overarching goals for optimising service quality and resource usage by means of self-organisation mechanisms. For SLM at the level of SOA workflows, temporary cross-service cooperations must be established to satisfy service level requirements, and these may span several administrative domains. Such a time-limited cooperation of autonomous subsystems can reasonably be realised only in a decentralised fashion, since the respective cooperation partners are not known in advance and, depending on the lifetime of individual workflows, participating components may be replaced at runtime. The thesis develops a method for coordinating autonomous management components with the goal of optimising response times at the workflow level: by transferring response-time shares among one another, management components can tighten or relax their individual targets without changing the overall response-time objective. The transfer of response-time shares is realised by means of an auction mechanism, with a group communication mechanism as the technical basis of the cooperation. Furthermore, services competing for shared, virtualised resources are prioritised according to business objectives. As part of the practical implementation, the realisation of central architectural elements and of the developed self-organisation methods is presented by way of example for the SLM of concrete components. A hybrid simulation approach is used to study the cooperation of management components in larger scenarios. The evaluation investigates the scalability of the approach, focusing on a system of cooperating management components, in particular with regard to communication overhead.
The evaluation shows that cross-service, autonomous performance management is feasible in SOA environments, and the results suggest that the developed approach can be applied successfully in large environments as well.
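The core mechanism of this abstract, transferring response-time shares between autonomous management components so that individual targets change while the workflow-level objective does not, can be illustrated with a minimal sketch. The single-round, highest-slack "auction" and all names below are assumptions for illustration; the thesis's actual protocol, including its group communication layer, is not specified here.

```python
from dataclasses import dataclass

@dataclass
class ServiceManager:
    """Autonomous management component holding a response-time share (ms)."""
    name: str
    target_ms: float      # individual response-time target
    observed_ms: float    # currently observed response time

    def slack(self) -> float:
        """Spare budget this manager could give up without missing its target."""
        return max(0.0, self.target_ms - self.observed_ms)

def auction_transfer(buyer: ServiceManager, sellers: list[ServiceManager],
                     needed_ms: float) -> None:
    """Single-round auction sketch: the manager with the most slack sells part
    of its share to the overloaded buyer. The sum of all targets (the
    workflow-level response-time objective) is unchanged by the transfer."""
    best = max(sellers, key=lambda m: m.slack())
    granted = min(needed_ms, best.slack())
    best.target_ms -= granted      # seller tightens its individual target
    buyer.target_ms += granted     # buyer relaxes its individual target

managers = [ServiceManager("A", 200, 250),   # A misses its target by 50 ms
            ServiceManager("B", 300, 180),   # B has 120 ms of slack
            ServiceManager("C", 250, 240)]
total_before = sum(m.target_ms for m in managers)
auction_transfer(managers[0], managers[1:], needed_ms=50)
assert total_before == sum(m.target_ms for m in managers)  # objective preserved
```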

Relevance:

60.00%

Publisher:

Abstract:

This thesis presents a method for the use of neural networks that iteratively combines classification and forecasting steps, with the goal of achieving better forecasting results than a single, sequential execution of these steps. The method is discussed using the example of forecasting wind power generation as a function of the weather situation. In this setting, an improvement is achieved for individual outliers. Various aspects are discussed in three chapters: Chapter 1 introduces the data used and their electronic processing. The data consist, on the one hand, of wind power extrapolations for the Federal Republic of Germany for the years 2011 and 2012, which the transmission system operators are obliged to publish under the transparency requirements of the German Renewable Energy Act. On the other hand, weather forecasts provided free of charge by the Deutscher Wetterdienst as part of its basic service are used. Chapter 2 explains two methods known from the literature, the online and the batch algorithm, for training a self-organising map. The properties of these methods motivate the choice of the batch algorithm for the approach described in Chapter 3. In modelled operational use, the method presented in Chapter 3 follows the same procedure as a classification with subsequent class-specific forecasting. During training, however, an iterative procedure is used: after the class-specific forecasting models have been trained, it is determined to which class an input datum should belong in order to achieve the highest forecast quality with the available class-specific forecasting models. The resulting partitioning of the inputs can in turn be used to train a new classification stage whose classes enable improved class-specific forecasts.
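The training loop described here, classify, fit class-specific predictors, reassign each input to the class whose predictor forecasts it best, then retrain the classifier, can be sketched as follows. This is a simplified illustration under stated assumptions: synthetic data, linear least squares in place of the thesis's neural networks, and label reassignment in place of a self-organising map.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # stand-in for weather features
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + np.sin(X[:, 0])  # stand-in power signal

K, ITERATIONS = 3, 5
Xb = np.c_[X, np.ones(len(X))]                 # features plus intercept column
labels = rng.integers(0, K, size=len(X))       # arbitrary initial classification
models = {k: np.zeros(Xb.shape[1]) for k in range(K)}

for _ in range(ITERATIONS):
    # 1) Train one class-specific forecast model per class (linear here).
    for k in range(K):
        mask = labels == k
        if mask.sum() > Xb.shape[1]:           # skip classes that became empty
            models[k], *_ = np.linalg.lstsq(Xb[mask], y[mask], rcond=None)
    # 2) Determine, for every input, which class's model forecasts it best,
    #    and use that class as the input's new label.
    errors = np.stack([np.abs(Xb @ models[k] - y) for k in range(K)])
    labels = errors.argmin(axis=0)

# 3) In the thesis, a new classification stage (a SOM) would now be trained on
#    these labels so that unseen inputs can be routed to the best model.
```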

Relevance:

60.00%

Publisher:

Abstract:

This undergraduate thesis aims to describe some characteristics of the four main universities in Bogotá, namely the Javeriana, the Andes, the Sabana and the Rosario, with regard to virtual education, the infrastructure supporting their Internet networks, the level of penetration, and the use that students at each institution make of this tool. To that end, the international context is analysed: how the subject evolved from its origins in the United States, its current situation, and the reach of the tool in countries around the world. The thesis also shows the state of information and communication technologies at some universities around the world, their application by students, and the advantages they offer for present and future education. Likewise, it studies the level of Internet penetration among students of the universities of Bogotá, their frequency of use, the most common activities carried out through this tool, the infrastructure supporting it, the quality of service, the level of satisfaction and, perhaps most importantly, whether the students of the four main universities of Bogotá use electronic media on the universities' own premises. For this research, surveys were conducted with the directors and the staff responsible for related matters at each of the universities under study, so as to gain an understanding of how the subject is currently handled at the universities of the Colombian capital, its development over time, and its projection into the future. After analysing the results of the interviews with the universities' directors and collecting and analysing information on Internet use at the universities, the main objective of the proposed research is answered: to establish the profile of the user of electronic media at the four main universities of Bogotá.

Relevance:

60.00%

Publisher:

Abstract:

This undergraduate thesis seeks to optimise the picking process at Compulens y Llanes LTDA, a manufacturer of ophthalmic lenses, in order to reduce times and speed up the operation. Tools covered throughout the Logistics and Production Management undergraduate programme are used to carry out an in-depth analysis of the current state of the process and of the warehouse layout; tools from operations research, internal logistics and other fields are then applied to propose solutions that achieve the expected results. The starting point is an analysis of all the types of lens blanks machined at the company, in order to propose improvements and good practices for the processes carried out within the storage warehouse. These improvements and recommendations are intended to help the company keep its service promise to customers and avoid delays in order delivery.

Relevance:

60.00%

Publisher:

Abstract:

Wednesday 23rd April 2014. Speaker(s): Willi Hasselbring. Organiser: Leslie Carr. Time: 23/04/2014 14:00-15:00. Location: B32/3077. File size: 802 MB. Abstract: The internal behavior of large-scale software systems cannot be determined on the basis of static (e.g., source code) analysis alone. Kieker provides complementary dynamic analysis capabilities, i.e., monitoring/profiling and analyzing a software system's runtime behavior. Application Performance Monitoring is concerned with continuously observing a software system's performance-specific runtime behavior, including analyses such as assessing service level compliance or detecting and diagnosing performance problems. Architecture Discovery is concerned with extracting architectural information from an existing software system, including both structural and behavioral aspects, such as identifying architectural entities (e.g., components and classes) and their interactions (e.g., local or remote procedure calls). In addition to the Architecture Discovery of Java systems, Kieker supports Architecture Discovery for other platforms, including legacy systems implemented, for instance, in C#, C++, Visual Basic 6, COBOL or Perl. Thanks to Kieker's extensible architecture, it is easy to implement and use custom extensions and plugins. Kieker was designed for continuous monitoring in production systems, inducing only a very low overhead, which has been evaluated in extensive benchmark experiments. Please refer to http://kieker-monitoring.net/ for more information.

Relevance:

60.00%

Publisher:

Abstract:

Decisions in the business world are based on the assumption that resources are limited. Regardless of company size, there are crucial processes that cannot be executed owing to limitations of staff and of physical and financial assets, among others. For this reason, approaches such as the Theory of Constraints (TOC) and Activity-Based Management (ABM) have established themselves as a development framework within companies' administrative and production processes. This project analyses the customer-complaints process in Salón Profesional, one of the business units of Procter & Gamble, with the objective of formulating a financially viable alternative to improve the service level in complaint resolution for Salón Profesional customers across Latin America.

Relevance:

60.00%

Publisher:

Abstract:

Inventory management is one of the great challenges companies face today, especially those handling products with a high probability of damage and breakage, which is why defining inventory policies or an inventory management system would help ensure more lasting durability and sustainability in the market. The project presented below, carried out at Diageo, a multinational consumer goods company in the spirits sector, falls within the Management research line of the Universidad del Rosario. Under the programme on functional areas for management, with its focus on business durability, the Management research line seeks to generate knowledge about finance, marketing, operations and human resource management. Accordingly, starting from the premise that a durable company is one that "adapts its management to the intensity of the conditions of its sector environment and the forces of the market" (Leal, Guerrero, Rojas, & Rivera, 2011), it is necessary to direct the company's resources and efforts toward a new inventory policy for the wine portfolio, so that raising the service level has a positive effect on profitability and liquidity indicators.
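The trade-off this abstract gestures at, a higher service level requiring more inventory, can be made concrete with the standard safety-stock formula for normally distributed demand. A minimal sketch; the formula is textbook inventory theory, and the figures are invented for illustration, not Diageo's.

```python
from math import sqrt
from statistics import NormalDist

def reorder_point(mean_daily_demand: float, sd_daily_demand: float,
                  lead_time_days: float, service_level: float) -> float:
    """Reorder point = expected lead-time demand + safety stock, where
    safety stock = z * sigma_d * sqrt(lead time) for a cycle service level."""
    z = NormalDist().inv_cdf(service_level)          # e.g. 0.95 -> z = 1.645
    safety_stock = z * sd_daily_demand * sqrt(lead_time_days)
    return mean_daily_demand * lead_time_days + safety_stock

# Raising the target service level from 90% to 99% inflates the safety stock:
for sl in (0.90, 0.95, 0.99):
    print(sl, round(reorder_point(120, 35, lead_time_days=7, service_level=sl)))
```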

Relevance:

60.00%

Publisher:

Abstract:

A full assessment of para-virtualization is important, because without knowledge about the various overheads users cannot judge whether using virtualization is a good idea or not. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as under para-virtualization. The idea is to see what the overheads of para-virtualization are, as well as to look at the overheads of turning on monitoring and logging. The knowledge gained from assessing various benchmarks on these different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1). These virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, which have been developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). To assess these virtualization systems, we run the benchmarks on bare metal, then under para-virtualization, and finally we turn on monitoring and logging. The latter is important because users are interested in the Service Level Agreements (SLAs) used by cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different machines: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all of which are servers available at the University of Reading. A functional virtualization system is multi-layered and is driven by its privileged components. Virtualization systems can host multiple guest operating systems, each of which runs in its own domain, and the system schedules virtual CPUs and memory within each virtual machine (VM) to make the best use of the available resources; the guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run. No modifications are needed in the guest OS or application; i.e., the guest OS or application is not aware of the virtualized environment and runs normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines; i.e., these guest operating systems are aware that they are running on a virtual machine, and they provide near-native performance. Both para-virtualization and full virtualization can be deployed across various virtualized systems. Para-virtualization is an OS-assisted virtualization, in which some modifications are made in the guest operating system to enable better performance: the guest operating system is aware that it is running on virtualized hardware rather than on bare hardware, and its device drivers coordinate with the device drivers of the host operating system, reducing the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed.
It has been shown [0] that para-virtualization does not impose a significant performance overhead in high performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The "apparent" improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. In order to support this hypothesis, it is first necessary to define exactly what is meant by a "class" of application, and secondly it will be necessary to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is associated with the need for cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
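The overhead comparison described here reduces to timing each benchmark in every configuration and normalising against the bare-metal run. A minimal sketch of that calculation; the timings below are placeholders, not measurements from the paper.

```python
# Wall-clock times (seconds) for one benchmark in each configuration;
# the numbers are invented stand-ins, not the paper's Netlib results.
timings = {
    "bare-metal": 100.0,
    "para-virtualized": 104.5,
    "para-virtualized + monitoring/logging": 109.2,
}

baseline = timings["bare-metal"]
for config, t in timings.items():
    overhead = 100.0 * (t - baseline) / baseline   # percent slower than bare metal
    print(f"{config:40s} {t:7.1f} s  overhead {overhead:+5.1f}%")
```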

Relevance:

60.00%

Publisher:

Abstract:

Traditional resource management has had as its main objective the optimization of throughput, based on parameters such as CPU, memory, and network bandwidth. With the appearance of Grid markets, new variables that determine economic expenditure, benefit and opportunity must be taken into account. The Self-organizing ICT Resource Management (SORMA) project aims at allowing resource owners and consumers to exploit market mechanisms to sell and buy resources across the Grid. SORMA's motivation is to achieve efficient resource utilization by maximizing revenue for resource providers and minimizing the cost of resource consumption within a market environment. An overriding factor in Grid markets is the need to ensure that the desired quality of service levels meet the expectations of market participants. This paper explains the proposed use of an economically enhanced resource manager (EERM) for resource provisioning based on economic models. In particular, this paper describes techniques used by the EERM to support revenue maximization across multiple service level agreements and provides an application scenario to demonstrate its usefulness and effectiveness. Copyright © 2008 John Wiley & Sons, Ltd.
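The abstract does not detail the EERM's economic models, so the following is only a loose illustration of revenue-oriented provisioning: with more requests than shared capacity, choose the set of requests that maximises revenue. The greedy value-density heuristic and all numbers are assumptions, not SORMA's actual mechanism.

```python
def provision(requests, capacity_cpus: int):
    """Greedy revenue maximisation: accept requests in order of revenue per
    CPU until the shared capacity is exhausted (an approximation of the
    knapsack problem an economically enhanced resource manager must solve)."""
    accepted, used = [], 0
    for name, cpus, revenue in sorted(requests, key=lambda r: r[2] / r[1],
                                      reverse=True):
        if used + cpus <= capacity_cpus:
            accepted.append(name)
            used += cpus
    return accepted

requests = [("job-a", 4, 40.0), ("job-b", 8, 100.0), ("job-c", 2, 30.0)]
print(provision(requests, capacity_cpus=10))   # -> ['job-c', 'job-b']
```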

Relevance:

60.00%

Publisher:

Abstract:

BACKGROUND: The English Improving Access to Psychological Therapies (IAPT) initiative aims to make evidence-based psychological therapies for depression and anxiety disorder more widely available in the National Health Service (NHS). 32 IAPT services based on a stepped care model were established in the first year of the programme. We report on the reliable recovery rates achieved by patients treated in the services and identify predictors of recovery at patient level, at service level, and as a function of compliance with National Institute for Health and Care Excellence (NICE) treatment guidelines. METHOD: Data from 19,395 patients who were clinical cases at intake, attended at least two sessions, had at least two outcome scores and had completed their treatment during the study period were analysed. Outcome was assessed with the patient health questionnaire depression scale (PHQ-9) and the anxiety scale (GAD-7). RESULTS: Data completeness was high for a routine cohort study. Over 91% of treated patients had paired (pre-post) outcome scores. Overall, 40.3% of patients were reliably recovered at post-treatment, 63.7% showed reliable improvement and 6.6% showed reliable deterioration. Most patients received treatments that were recommended by NICE. When a treatment not recommended by NICE was provided, recovery rates were reduced. Service characteristics that predicted higher reliable recovery rates were: a high average number of therapy sessions; higher step-up rates among individuals who started with low-intensity treatment; larger services; and a larger proportion of experienced staff. CONCLUSIONS: Compliance with the IAPT clinical model is associated with enhanced rates of reliable recovery.
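"Reliable improvement" and "reliable deterioration" here refer to pre-post change exceeding a reliable change criterion on the outcome measures. The abstract does not state the thresholds used, so the sketch below computes only the generic Jacobson-Truax reliable change criterion from an instrument's score SD and test-retest reliability, with illustrative parameters rather than the official IAPT values.

```python
from math import sqrt

def reliable_change_threshold(sd: float, reliability: float,
                              z: float = 1.96) -> float:
    """Jacobson-Truax criterion: a pre-post change larger than this value is
    unlikely (at ~95% confidence) to be measurement error alone."""
    standard_error = sd * sqrt(1.0 - reliability)   # SE of measurement
    return z * sqrt(2.0) * standard_error           # SE of a difference score

# Illustrative (not official IAPT) parameters for a PHQ-9-like scale:
print(round(reliable_change_threshold(sd=5.0, reliability=0.85), 1))
```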

Relevance:

60.00%

Publisher:

Abstract:

This graduate study was commissioned by Unisys Oy Ab. The purpose of the study was to find tools to monitor and manage servers and objects in a hosting environment and to connect remotely to the managed objects. Better solutions for the promised services were also researched. Unisys provides a ServerHotel service to other businesses which do not have the time or resources to manage their own network, servers or applications. Contracts are based on a Service Level Agreement, in which the service level is agreed upon according to the customer's needs. These needs have created a demand for management tools. Unisys wanted to find the most appropriate tools for its hosting environment to fulfill the agreed service level at reasonable cost. The theory consists of literature research focusing on general agreements used in the Finnish IT business, different types of monitoring and management tools, and the common protocols used in them. The theory focuses mainly on the central elements of the above-mentioned topics and on their positive and negative features. The second part of the study focuses on general hosting agreements, which management tools Unisys selected for hosting, and why. It also gives a more detailed account of the hosting environment and its features. Based on the results of the study, Unisys decided to use Servers Alive to monitor the network and the services of MS applications. Cacti was chosen to monitor disk space, which gives an idea of future disk growth. For remote connections, Microsoft's Remote Desktop tool was the most appropriate when the connection was tunneled through Secure Shell (SSH). Finding proper tools for the intended purposes with limited financial resources proved challenging. This study showed that, if required, it is possible to build a professional hosting environment.
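The remote-connection setup the study settled on, Remote Desktop tunneled through SSH, amounts to an SSH local port forward. A minimal sketch of opening that forward from Python; the hostnames and user are placeholders, not details from the study, and the RDP client would then be pointed at localhost:3389.

```python
import subprocess

# Forward local port 3389 to the managed server's RDP port through an SSH
# gateway; '-N' keeps the tunnel open without running a remote command.
tunnel = subprocess.Popen(
    ["ssh", "-N", "-L", "3389:managed-server.example:3389",
     "admin@ssh-gateway.example"]
)
# ... connect the Remote Desktop client to localhost:3389, and when done:
# tunnel.terminate()
```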

Relevance:

60.00%

Publisher:

Abstract:

BACKGROUND AND OBJECTIVE: To a large extent, people who have suffered a stroke report unmet needs for rehabilitation. The purpose of this study was to explore aspects of rehabilitation provision that potentially contribute to self-reported met needs for rehabilitation 12 months after stroke, also taking stroke severity into consideration. METHODS: The participants (n = 173) received care at the stroke units of the Karolinska University Hospital, Sweden. The dependent variable, self-reported met needs for rehabilitation, was collected with a questionnaire at 12 months after stroke. The independent variables were four aspects of rehabilitation provision, based on data retrieved from registers: amount of rehabilitation; service level (day care rehabilitation, primary care rehabilitation and home-based rehabilitation); operator level (physiotherapist, occupational therapist, speech therapist); and time after stroke onset. Multivariate logistic regression analyses regarding the aspects of rehabilitation were performed for the participants, who were divided into three groups based on stroke severity at onset. RESULTS: Participants with moderate/severe stroke who had seen a physiotherapist at least once during each of the 1st, 2nd and 3rd-4th quarters of the first year (OR 8.36, CI 1.40-49.88, P = 0.020) were more likely to report met rehabilitation needs. CONCLUSION: For people with moderate/severe stroke, continuity of rehabilitation (preferably physiotherapy) during the first year after stroke seems to be associated with self-reported met needs for rehabilitation.

Relevance:

60.00%

Publisher:

Abstract:

MyGrid is an e-Science Grid project that aims to help biologists and bioinformaticians perform workflow-based in silico experiments, and to help them automate the management of such workflows through personalisation, notification of change and publication of experiments. In this paper, we describe the architecture of myGrid and how it will be used by the scientist. We then show how myGrid can benefit from agent technologies. We have identified three key uses of agent technologies in myGrid: user agents, able to customise and personalise data; agent communication languages, offering a generic and portable communication medium; and negotiation, allowing multiple distributed entities to reach service level agreements.

Relevance:

60.00%

Publisher:

Abstract:

Over the last two decades, the outsourcing of IT services has become a popular topic for many IS researchers. Furthermore, managing IT services (both internally and externally provided) has become an emerging area for academic research, given the criticality of IT services in modern organizations. One of the better-known IT service management frameworks is the Information Technology Infrastructure Library (ITIL). While many claims are made about the relationship between ITIL and IT outsourcing, these claims still need further empirical research. Using data gathered from a preliminary focus group, this study investigates how ITIL-recommended practices impact the success of IT outsourcing arrangements.

Relevance:

60.00%

Publisher:

Abstract:

Background
Lifestyle risk factors like smoking, nutrition, alcohol consumption, and physical inactivity (SNAP) are the main behavioural risk factors for chronic disease. Primary health care is an appropriate setting to address these risk factors in individuals. Generalist community health nurses (GCHNs) are uniquely placed to provide lifestyle interventions as they see clients in their homes over a period of time. The aim of the paper is to examine the impact of a service-level intervention on the risk factor management practices of GCHNs.

Methods
The trial used a quasi-experimental design involving four generalist community nursing services in NSW, Australia. The services were randomly allocated to either an intervention group or control group. Nurses in the intervention group were provided with training and support in the provision of brief lifestyle assessments and interventions. The control group provided usual care. A sample of 129 GCHNs completed surveys at baseline, 6 and 12 months to examine changes in their practices and levels of confidence related to the management of SNAP risk factors. Six semi-structured interviews and four focus groups were conducted among the intervention group to explore the feasibility of incorporating the intervention into everyday practice.

Results

Nurses in the intervention group became more confident in assessment and intervention over the three time points compared to their control group peers. Nurses in the intervention group reported assessing physical activity, weight and nutrition more frequently, as well as providing more brief interventions for physical activity, weight management and smoking cessation. There was little change in referral rates except for an improvement in weight management related referrals. Nurses’ perception of the importance of ‘client and system-related’ barriers to risk factor management diminished over time.

Conclusions
This study shows that the intervention was associated with positive changes in self-reported lifestyle risk factor management practices of GCHNs. Barriers to referral remained. The service model needs to be adapted to sustain these changes and enhance referral.