639 results for Service level agreements


Relevance:

90.00%

Publisher:

Abstract:

We consider the problem of optimizing the workforce of a service system. Adapting the staffing levels in such systems is non-trivial due to large variations in workload, and the large number of system parameters does not allow for a brute-force search. Further, because these parameters change on a weekly basis, the optimization should not take longer than a few hours. Our aim is to find the optimal staffing levels, from a discrete high-dimensional parameter set, that minimize the long-run average of the single-stage cost function while adhering to constraints on queue stability and service-level agreement (SLA) compliance. The single-stage cost function balances the conflicting objectives of utilizing workers better and attaining the target SLAs. We formulate this problem as a constrained Markov cost process parameterized by the (discrete) staffing levels. We propose novel simultaneous perturbation stochastic approximation (SPSA)-based algorithms for solving this problem. The algorithms include both first-order and second-order methods and incorporate SPSA-based gradient/Hessian estimates for primal descent, while performing dual ascent for the Lagrange multipliers. Both algorithms are online and update the staffing levels incrementally. Further, they involve a certain generalized smooth projection operator, which is essential to project the continuous-valued worker parameter tuned by our algorithms onto the discrete set. The smoothness is necessary to ensure that the underlying transition dynamics of the constrained Markov cost process are themselves smooth (as a function of the continuous-valued parameter): a critical requirement for proving the convergence of both algorithms. We validate our algorithms via performance simulations based on data from five real-life service systems. For comparison, we also implement a scatter-search-based algorithm using the state-of-the-art optimization toolkit OptQuest. From the experiments, we observe that both our algorithms converge empirically and consistently outperform OptQuest in most of the settings considered. This finding, coupled with the computational advantage of our algorithms, makes them amenable to adaptive labor staffing in real-life service systems.
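
A minimal sketch of the kind of first-order SPSA update with dual ascent described above. The `simulate` function is a hypothetical black box returning one period's cost and SLA violation; the step sizes, perturbation size, and the plain-rounding projection are placeholders (the paper's generalized projection is smooth, which rounding is not):

```python
import numpy as np

rng = np.random.default_rng(0)

def lagrangian_cost(theta, lam, simulate):
    """Single-stage Lagrangian: operating cost plus weighted SLA violation."""
    cost, sla_violation = simulate(theta)          # one simulated period
    return cost + lam * sla_violation

def spsa_step(theta, lam, simulate, delta=1.0, a=0.1):
    """One first-order SPSA update of the continuous staffing parameters."""
    d = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation
    j_plus = lagrangian_cost(theta + delta * d, lam, simulate)
    j_minus = lagrangian_cost(theta - delta * d, lam, simulate)
    grad = (j_plus - j_minus) / (2.0 * delta * d)  # two-measurement gradient estimate
    return theta - a * grad                        # primal descent

def dual_step(lam, theta, simulate, b=0.05):
    """Dual ascent on the Lagrange multiplier of the SLA constraint."""
    _, sla_violation = simulate(theta)
    return max(0.0, lam + b * sla_violation)

def project(theta, low=1, high=50):
    """Stand-in projection onto the discrete staffing grid; the paper uses a
    smooth generalized projection, plain rounding is shown only for brevity."""
    return np.clip(np.rint(theta), low, high)
```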

Relevance:

90.00%

Publisher:

Abstract:

This paper focuses on two main areas. We first investigate various aspects of subscription and session Service Level Agreement (SLA) issues, such as negotiating and setting up network services with Quality of Service (QoS) and pricing preferences. We then introduce an agent-enhanced service architecture that facilitates these services. A prototype system consisting of real-time agents that represent the various network stakeholders was developed. A novel approach is presented in which the agent system is allowed to communicate with a simulated network. This allows the functional and dynamic behaviour of the network to be investigated under various agent-supported scenarios. This paper also highlights the effects of SLA negotiation and dynamic pricing in a competitive multi-operator network environment.

Relevance:

90.00%

Publisher:

Abstract:

This graduate study was commissioned by Unisys Oy Ab. Its purpose was to find tools to monitor and manage servers and objects in a hosting environment and to connect remotely to the managed objects. Better solutions for the promised services were also researched. Unisys provides a ServerHotel service to other businesses which do not have the time or resources to manage their own network, servers or applications. Contracts are based on a Service Level Agreement, where the service level is agreed upon according to the customer's needs. These needs have created a demand for management tools. Unisys wanted to find the most appropriate tools for its hosting environment to fulfil the agreed service level at reasonable cost. The theoretical part consists of a literature review covering general agreements used in the Finnish IT business, different types of monitoring and management tools, and the common protocols used in them. It focuses mainly on the central elements of these topics and on their strengths and weaknesses. The second part of the study covers general hosting agreements and which management tools Unisys selected for hosting and why; it also describes the hosting environment and its features in more detail. Based on the results of the study, Unisys decided to use Servers Alive to monitor network and MS application services. Cacti was chosen to monitor disk space, which gives an indication of future disk growth. For remote connections, Microsoft's Remote Desktop tool was the most appropriate when the connection was tunneled through Secure Shell (SSH). Finding proper tools for the intended purposes within cost-conscious financial resources proved challenging. This study showed that, if required, it is possible to build a professional hosting environment.

Relevance:

90.00%

Publisher:

Abstract:

Content adaptation is used to adapt multimedia content to the version required by users. In the service-oriented scheme, adaptation functions are provided as services by third-party service providers. Clients pay for the consumed services and thus demand service quality. Providers advertise their services, each with varied quality of service (QoS). Some of these QoS levels, however, may not be deliverable during actual service execution due to heavy load. Thus, the provider should be able to determine the currently deliverable QoS before the service level agreement (SLA) is settled with the requesters. In this paper, we propose a strategy that lets service providers evaluate incoming requests and offer a revised QoS to requests that would otherwise be rejected. The proposed strategy takes into account the current server load and the requests' priorities. We analysed the performance of the proposed strategy in terms of SLA settlement under various conditions. The results indicate that the proposed strategy performs well. © 2014 IEEE.
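
A minimal sketch of the admit/counter-offer/reject logic such a strategy implies. The `Request` fields, the cost model, and all thresholds are invented for illustration, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Request:
    client_id: str
    requested_quality: int   # e.g. 1 (lowest) .. 5 (highest)
    priority: int            # higher value = more important client

def evaluate(request, current_load, capacity, quality_cost=10):
    """Return the QoS level written into the SLA, or None if rejected.

    If the requested QoS fits the current load, admit it as-is; otherwise
    counter-offer the highest QoS that still fits, but only to priority
    clients who would otherwise be rejected outright.
    """
    demand = request.requested_quality * quality_cost
    if current_load + demand <= capacity:
        return request.requested_quality        # requested QoS is deliverable
    headroom = capacity - current_load
    offer = headroom // quality_cost            # best QoS level that still fits
    if offer >= 1 and request.priority >= 2:    # renegotiate for priority clients
        return offer
    return None                                 # reject outright
```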

Relevance:

90.00%

Publisher:

Abstract:

Service-oriented architecture has been proposed to support collaborations among distributed wireless sensor network (WSN) applications in an open, dynamic environment. However, WSNs are resource constrained: they have limited computation abilities, limited communication bandwidth and, especially, limited energy. Fortunately, sensor nodes in WSNs are usually deployed redundantly, which creates the opportunity to adopt a sleep schedule that balances energy consumption and extends the network lifetime. Thanks to miniaturization and energy efficiency, one sensor node can integrate several sensing units and support a variety of services. Traditional sleep scheduling considers only the constraints from the sensor nodes and can thus be categorized as a one-layer (node-layer) issue. Service-oriented WSNs, in contrast, should resolve the energy optimization issue under two-layer constraints, i.e., the sensor node layer and the service layer, so the one-layer energy optimization schemes of previous work are not applicable. Hence, in this paper we propose a sleep schedule with a service coverage guarantee in WSNs. First, by considering the redundancy degree at both the service level and the node level, we obtain an accurate redundancy degree for each sensor node. We then adopt fuzzy logic to integrate the redundancy degree, reliability and energy into a sleep factor, on which the proposed sleep mechanism is based. A case study and simulation evaluations illustrate the capability of the proposed approach.
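
A minimal sketch of the sleep-factor idea, assuming all three inputs are already normalized to [0, 1]. A plain weighted aggregation stands in for the paper's fuzzy-logic integration, and the combination rule, weights and threshold are all illustrative:

```python
def redundancy_degree(node_cover, service_cover):
    """Combine node-level and service-level redundancy into one degree in
    [0, 1]; an illustrative min-style combination, not the paper's derivation."""
    return min(node_cover, service_cover)

def sleep_factor(redundancy, reliability, residual_energy,
                 weights=(0.5, 0.2, 0.3)):
    """Stand-in for the fuzzy-logic integration: a node covered redundantly
    at both layers, with reliable substitutes and low remaining energy,
    should sleep first."""
    w_red, w_rel, w_en = weights
    return (w_red * redundancy
            + w_rel * reliability
            + w_en * (1.0 - residual_energy))

def should_sleep(node, threshold=0.6):
    """Put the node to sleep when its sleep factor exceeds the threshold."""
    f = sleep_factor(node["redundancy"], node["reliability"], node["energy"])
    return f >= threshold
```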

Relevance:

90.00%

Publisher:

Abstract:

This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources to combine. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, discovery is performed at three levels: content, service and agent. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as Screen Scraping in the literature, is the content discovery of the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after an analysis of service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow specifying behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that introspect the discovered data and services from the web, and the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific resources. Through the definition of plans, an agent can be configured to target specific resources. The discovery framework has been evaluated on different scenarios, each covering different levels of the framework. The Contenidos a la Carta project deals with the mashing up of news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web.
Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered in component repositories from the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web. The main contributions of the thesis are the unified framework for discovery, which allows configuring agents to perform automated tasks; a scraping ontology for the construction of mappings for scraping web resources; a novel first-order logic rule induction algorithm for the automated construction and maintenance of these mappings out of the visual information in web resources; and a common unified model for the discovery of services, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the scraping ontology, extensions of the agent model, and constructing a base of discovery rules.
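
As a flavour of the content-discovery idea, the sketch below applies one discovery rule, here just CSS selectors mapped onto property names, to an HTML representation. The rule format, selectors and vocabulary URI are invented, and BeautifulSoup stands in for the thesis's own rule engine:

```python
from bs4 import BeautifulSoup  # third-party library, used here as a stand-in

# A content-discovery rule: CSS selectors mapped onto semantic properties.
# The selectors and the vocabulary URI are hypothetical.
NEWS_RULE = {
    "entity": "http://example.org/ns#NewsItem",
    "fields": {
        "headline": "article h1",
        "body": "article .story-body p",
        "author": "article .byline",
    },
}

def apply_rule(html, rule):
    """Map a resource representation onto a semantic entity via one rule."""
    soup = BeautifulSoup(html, "html.parser")
    entity = {"@type": rule["entity"]}
    for prop, selector in rule["fields"].items():
        nodes = soup.select(selector)
        if nodes:
            entity[prop] = " ".join(n.get_text(strip=True) for n in nodes)
    return entity
```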

Relevance:

90.00%

Publisher:

Abstract:

Telecommunications networks have always been expanding, and thanks to this growth new services have appeared. The old mechanisms for carrying packets have become obsolete due to the requirements of the new services, which operate in real time. Real-time traffic requires strict service guarantees: when this traffic is sent through the network, enough resources must be provided in order to avoid delays and information losses. When browsing the Internet and requesting web pages, data must be sent from a server to the user; if any packet is dropped during transmission, it is simply sent again. For the end user, it does not matter if the web page takes one or two seconds more to load. But if the user is holding a conversation with a VoIP program, such as Skype, one or two seconds of delay may be catastrophic, and neither party can understand the other. In order to support these new services, the networks have to evolve, and MPLS and QoS were developed for this purpose. MPLS is a packet-carrying mechanism used in high-performance telecommunication networks which directs and carries data along pre-established paths. Packets are forwarded on the basis of labels, making this process faster than routing on IP addresses. MPLS also supports Traffic Engineering (TE), the process of selecting the best paths for data traffic in order to balance the traffic load across the different links. In a network with multiple paths, routing algorithms calculate the shortest one, and most of the time all traffic is directed through it, causing overload and packet drops, while the other paths the network offers carry no traffic at all. But this alone is not enough to give real-time traffic the guarantees it needs: these mechanisms improve the network, but they do not change how the traffic is treated. That is why Quality of Service (QoS) was developed. Quality of service is the ability to give different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow. Traffic is divided into classes, and each class is treated differently according to its Service Level Agreement (SLA). Traffic with the highest priority has preference over lower classes, but this does not mean it monopolizes all the resources. To achieve this, a set of policies is defined to control and alter how the traffic flows. The possibilities are endless and depend on how the network must be structured. Using these mechanisms, it is possible to provide the necessary guarantees to real-time traffic, distributing it among categories inside the network and offering the best service for both real-time and non-real-time data.
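
A toy sketch of the class-based treatment just described: a strict-priority scheduler with per-class caps, so the highest class is preferred but cannot starve the others. The class numbers and caps are invented for the example:

```python
import heapq
import itertools

class PriorityScheduler:
    """Toy strict-priority queue with per-class rate caps, illustrating how
    traffic classes defined in an SLA can be treated differently."""

    def __init__(self, caps):
        self.caps = caps                  # class -> max packets per cycle
        self.queue = []
        self.seq = itertools.count()      # FIFO tie-break within a class

    def enqueue(self, packet, qos_class):
        heapq.heappush(self.queue, (qos_class, next(self.seq), packet))

    def dequeue_cycle(self):
        """Drain one scheduling cycle: lower class number = higher priority,
        but no class may exceed its cap, so lower classes are not starved."""
        sent = {c: 0 for c in self.caps}
        out, leftover = [], []
        while self.queue:
            cls, seq, pkt = heapq.heappop(self.queue)
            if sent[cls] < self.caps[cls]:
                sent[cls] += 1
                out.append(pkt)
            else:
                leftover.append((cls, seq, pkt))
        for item in leftover:             # deferred packets wait for next cycle
            heapq.heappush(self.queue, item)
        return out

# Example: class 0 (VoIP) may send 3 packets per cycle, class 1 (web) 2.
sched = PriorityScheduler({0: 3, 1: 2})
```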

Relevance:

90.00%

Publisher:

Abstract:

Multinational manufacturing companies often face very difficult decisions regarding where and how to cost-effectively manufacture products in a global setting. Clearly, they must use efficient and responsive manufacturing strategies to reach low-cost solutions, but they must also consider the impact of manufacturing and transportation solutions on their ability to support sales. One important sales consideration is determining how much work in process, in-transit stock, and finished goods to keep on hand to support sales at a desired service level. This paper addresses this consideration through a comprehensive scenario-based simulation approach, including sensitivity analysis on key study parameters. Results indicate that inventory needs vary considerably across manufacturing and delivery methods, in ways that may not be obvious when using common evaluative tools.
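
For a flavour of the underlying calculation, the sketch below pairs the textbook normal-approximation base-stock formula with a small Monte Carlo check of the achieved cycle service level. The demand parameters and the 95% target are placeholders, not figures from the paper:

```python
from statistics import NormalDist
import random

def base_stock_level(mean_daily_demand, sd_daily_demand, lead_time_days,
                     service_level=0.95):
    """Classic normal approximation: stock = lead-time demand + z * sigma."""
    z = NormalDist().inv_cdf(service_level)
    mu = mean_daily_demand * lead_time_days
    sigma = sd_daily_demand * lead_time_days ** 0.5
    return mu + z * sigma

def simulate_cycle_service(level, mean, sd, lead_time, cycles=10_000, seed=1):
    """Scenario check: fraction of cycles the stock covers lead-time demand."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(cycles):
        demand = sum(max(0.0, rng.gauss(mean, sd)) for _ in range(lead_time))
        hits += demand <= level
    return hits / cycles

# Slower transport (longer lead time) needs visibly more stock for the same target.
for lead in (3, 14):                      # e.g. air vs. sea freight, illustrative
    s = base_stock_level(100, 30, lead)
    print(lead, round(s), simulate_cycle_service(s, 100, 30, lead))
```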

Relevance:

90.00%

Publisher:

Abstract:

The Internet has become a universal communication network tool. It has evolved from a platform that supports best-effort traffic to one that now carries different traffic types, including continuous media with quality of service (QoS) requirements. As more services are delivered over the Internet, we face increasing risk to their availability, given that malicious attacks on those Internet services continue to increase. Several networks have witnessed denial of service (DoS) and distributed denial of service (DDoS) attacks over the past few years, which have disrupted the QoS of network services and thereby violated the Service Level Agreement (SLA) between the client and the Internet Service Provider (ISP). Hence DoS and DDoS attacks are major threats to network QoS. In this paper we survey techniques and solutions that have been deployed to thwart DoS and DDoS attacks, and we evaluate them in terms of their impact on network QoS for Internet services. We also present vulnerabilities in QoS protocols that, if exploited, affect QoS. In addition, we highlight challenges that still need to be addressed to achieve end-to-end QoS with recently proposed DoS/DDoS solutions. © 2010 John Wiley & Sons, Ltd.

Relevance:

80.00%

Publisher:

Abstract:

Objectives: The Nurse Researcher Project (NRP) was initiated to support the development of a nursing research and evidence-based practice culture in Cancer Care Services (CCS) in a large tertiary hospital in Australia. The position was established and evaluated to inform future directions in the organisation.---------- Background: The demand for quality cancer care has been expanding over the past decades. Nurses are well placed to make an impact on improving the health outcomes of people affected by cancer. At the same time, there is a robust body of literature documenting the barriers to undertaking and utilising research by and for nurses and nursing. A number of strategies have been implemented to address these barriers, including a range of staff researcher positions, but scant attention has been paid to evaluating the outcomes of these strategies. The role of nurse researcher has been documented in the literature with the aim of providing support to nurses in the clinical setting. There is, to date, little information on the design, implementation and evaluation of this role.---------- Design: Donabedian's model of program evaluation was used to implement and evaluate this initiative.---------- Methods: The NRP outlined the steps needed to implement the nurse researcher role in a clinical setting: designing the role, planning the support system for the role, and evaluating the outcomes of the role over two years.---------- Discussion: This paper proposes an innovative and feasible model to support clinical nursing research that would be relevant to a range of service areas.---------- Conclusion: Nurse researchers can play a crucial role in advancing nursing knowledge and facilitating evidence-based practice, especially when placed to support a specialised team of nurses at a service level. This role can be implemented through appropriate planning of the position, building a support system and incorporating an evaluation plan.

Relevance:

80.00%

Publisher:

Abstract:

Creating sustainable urban environments is a challenging issue that needs a clear vision and implementation strategies, involving changes in governmental values and in local governments' decision-making processes. In particular, internalising the environmental externalities of daily urban activities (e.g. manufacturing, transportation and so on) is immensely important, and local policies are formulated to provide better living conditions for people inhabiting urban areas. Even when environmental problems are defined succinctly by the various stakeholders, the complicated nature of sustainability issues demands a structured evaluation strategy and well-defined sustainability parameters for efficient and effective policy making. Following this reasoning, this study assesses the sustainability performance of urban settings, focusing mainly on environmental problems caused by rapid urban expansion and transformation. By taking the land-use and transportation interaction into account, it tries to reveal how future urban developments would alter people's daily travel behaviour and affect the urban and natural environments. The paper introduces a grid-based indexing method developed for this research and trialled as a GIS-based decision support tool to analyse and model selected spatial and aspatial indicators of sustainability on the Gold Coast. This process reveals site-specific relationships among the selected indicators, which are used to evaluate index-based performance characteristics of the area. The evaluation is made through an embedded decision support module by assigning relative weights to indicators. The resolution of the selected grid-based unit of analysis provides insights into the service level of projected urban development proposals at a disaggregate level, such as accessibility to transportation and urban services, and pollution. The paper concludes by discussing the findings, including the capacity of the decision support system to assist decision-makers in identifying problematic areas and developing intervention policies for sustainable outcomes of future developments.
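
A minimal sketch of the weighted, grid-based indexing idea, assuming each indicator has already been rasterized onto a common grid. The indicator names, weights, and the plain min-max normalization (which ignores indicator direction, e.g. that pollution is a disamenity) are all illustrative:

```python
import numpy as np

def composite_index(indicator_grids, weights):
    """Weighted sum of min-max-normalized indicator layers on a common grid.
    Each layer is a 2-D array aligned cell-by-cell with the others."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9   # weights sum to one
    index = np.zeros_like(next(iter(indicator_grids.values())), dtype=float)
    for name, grid in indicator_grids.items():
        lo, hi = grid.min(), grid.max()
        norm = (grid - lo) / (hi - lo) if hi > lo else np.zeros_like(grid)
        index += weights[name] * norm
    return index

# Hypothetical 3x3 study area with two indicator layers.
grids = {
    "transit_accessibility": np.array([[2, 5, 8], [1, 4, 7], [0, 3, 6]], float),
    "air_pollution":         np.array([[9, 6, 3], [8, 5, 2], [7, 4, 1]], float),
}
scores = composite_index(grids, {"transit_accessibility": 0.6,
                                 "air_pollution": 0.4})
```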

Relevance:

80.00%

Publisher:

Abstract:

Engineering asset management (EAM) is a broad discipline, and EAM functions and processes are characterized by their distributed nature. However, engineering asset maintenance nowadays mostly relies on self-maintained experiential rule bases and periodic maintenance, lacking a collaborative engineering approach. This research proposes a collaborative environment integrated by a service center with domain expertise such as diagnosis, prognosis, and asset operations. The collaborative maintenance chain combines asset operation sites, the service center (i.e., the maintenance operation coordinator), the system provider, first-tier collaborators, and maintenance part suppliers. Meanwhile, to automate communication and negotiation among the organizations, multi-agent system (MAS) techniques are applied to enhance the entire service level. During the MAS design process, this research combines the Prometheus MAS modeling approach with Petri-net modeling methodology and the Unified Modeling Language to visualize and rationalize the design of the MAS. The major contributions of this research are a Petri-net enabled Prometheus MAS modeling methodology and a collaborative agent-based maintenance chain framework for integrated EAM.
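
A tiny sketch of the automated negotiation such a maintenance chain implies: a coordinator agent broadcasts a call for a part and awards the order, in a contract-net style. The agent roles follow the abstract, but the protocol details and all names are invented:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Bid:
    supplier: str
    part: str
    price: float
    lead_time_days: int

class SupplierAgent:
    def __init__(self, name, catalogue):
        self.name = name
        self.catalogue = catalogue            # part -> (price, lead time in days)

    def quote(self, part) -> Optional[Bid]:
        if part not in self.catalogue:
            return None                       # decline to bid
        price, lead = self.catalogue[part]
        return Bid(self.name, part, price, lead)

class ServiceCenterAgent:
    """Maintenance operation coordinator: collects bids and awards the order."""

    def __init__(self, suppliers):
        self.suppliers = suppliers

    def procure(self, part, deadline_days) -> Optional[Bid]:
        bids = [s.quote(part) for s in self.suppliers]
        feasible = [b for b in bids if b and b.lead_time_days <= deadline_days]
        return min(feasible, key=lambda b: b.price) if feasible else None
```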

Relevance:

80.00%

Publisher:

Abstract:

Objective: Australian Indigenous peoples in remote and rural settings continue to have limited access to treatment for mental illness. Comorbid disorders complicate presentations in primary care, where Indigenous youths and perinatal women are at particular risk. Despite this high comorbidity, there are few examples of successful models of integrated treatment. This paper outlines these challenges and provides recommendations for practice that derive from recent developments in the Northern Territory. Conclusions: There is a strong need to develop evidence for the effectiveness of integrated and culturally informed individual and service-level interventions. We describe the Best practice in Early intervention Assessment and Treatment of depression and substance misuse study, which seeks to address this need.

Relevance:

80.00%

Publisher:

Abstract:

The health system is one sector dealing with a deluge of complex data. Many healthcare organisations struggle to utilise these volumes of health data effectively and efficiently, and many still run stand-alone systems that are not integrated for information management and decision-making. There is therefore a need for an effective system to capture, collate and distribute health data, and implementing the data warehouse concept in healthcare is potentially one solution for integrating it. Data warehousing has been used to support business intelligence and decision-making in many other sectors, such as engineering, defence and retail. The research problem addressed here is: how can data warehousing assist the decision-making process in healthcare? To address this problem, the researcher narrowed the investigation to a cardiac surgery unit, using the cardiac surgery unit at The Prince Charles Hospital (TPCH) as the case study. The cardiac surgery unit at TPCH uses a stand-alone database of patient clinical data, which supports clinical audit, service management and research functions. Much of the time, however, the interaction between the cardiac surgery unit's information system and other units is minimal; there is only limited, basic two-way interaction with the other clinical and administrative databases at TPCH that support decision-making processes. The aims of this research are to investigate what decision-making issues healthcare professionals face with the current information systems, and how decision-making might be improved within this healthcare setting by implementing an aligned data warehouse model or models. As part of the research, the researcher proposes and develops a suitable data warehouse prototype based on the cardiac surgery unit's needs, integrating the Intensive Care Unit database, the Clinical Costing unit database (Transition II) and the Quality and Safety unit database [electronic discharge summary (e-DS)], with the goal of improving the current decision-making processes. The main objective is to improve access to integrated clinical and financial data, providing potentially better information for decision-making. Based on the questionnaire responses and the literature, the results indicate a centralised data warehouse model for the cardiac surgery unit at this stage; a centralised model addresses current needs and can later be upgraded to an enterprise-wide or federated data warehouse model, as discussed in many of the consulted publications. The data warehouse prototype was developed using SAS enterprise data integration studio 4.2, and the data were analysed using SAS enterprise edition 4.3. In the final stage, the prototype was evaluated by collecting feedback from the end users, using output created from the prototype as examples of the data desired and possible in a data warehouse environment. According to this feedback, a data warehouse was seen to be a useful tool to inform management options, provide a more complete representation of the factors related to a decision scenario and potentially reduce information product development time. However, many constraints affected this research.
For example, technical issues arose such as data incompatibilities, the integration of the cardiac surgery database and e-DS database servers, Queensland Health information restrictions (information-related policies, patient data confidentiality and ethics requirements), limited availability of support from IT technical staff, and time restrictions. These factors influenced the warehouse model development process, necessitating an incremental approach, and highlight the many practical barriers to data warehousing and integration at the clinical service level. Limitations included the use of a small convenience sample of survey respondents and a single-site case report study design. As mentioned previously, the proposed data warehouse is a prototype and was developed using only four database repositories. Despite these constraints, the research demonstrates that implementing a data warehouse at the service level supports decision-making and reduces data quality issues related to access and availability, providing many benefits. Output reports produced from the prototype demonstrated its usefulness for improving decision-making in the management of clinical services, and for quality and safety monitoring for better clinical care. In the future, the selected centralised model can be upgraded to an enterprise-wide architecture by integrating the databases of additional hospital units.
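
To make the centralised model concrete, here is a minimal star-schema sketch in SQLite, with one fact table fed by the three source systems named above. All table and column names are invented for illustration, not taken from the thesis:

```python
import sqlite3

# Minimal star schema: a fact table of cardiac surgery episodes with keys
# into dimension tables, combining feeds from the ICU database, the costing
# system (Transition II) and the discharge summary (e-DS).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_patient   (patient_key INTEGER PRIMARY KEY, age INTEGER);
CREATE TABLE dim_procedure (proc_key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_episode (
    patient_key INTEGER,
    proc_key    INTEGER,
    icu_hours   REAL,      -- ICU feed
    total_cost  REAL,      -- Transition II feed
    readmitted  INTEGER    -- e-DS feed
);
""")

# A decision-support question spanning the integrated sources:
# average cost and ICU stay per procedure, with readmission rate.
report = con.execute("""
SELECT p.name,
       AVG(f.total_cost)       AS avg_cost,
       AVG(f.icu_hours)        AS avg_icu_hours,
       AVG(f.readmitted) * 100 AS readmission_pct
FROM fact_episode f JOIN dim_procedure p USING (proc_key)
GROUP BY p.name
""").fetchall()
```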

Relevance:

80.00%

Publisher:

Abstract:

Through the rise of cloud computing, on-demand applications, and business networks, services are increasingly being exposed and delivered on the Internet and through mobile communications. So far, services have mainly been described through technical interface descriptions. The description of business details, such as pricing, service levels, or licensing, has been neglected and is therefore hard for service consumers to process automatically. Third-party intermediaries, such as brokers, cloud providers, or channel partners, are also interested in these business details in order to extend services and their delivery and thus further monetize them. In this paper, the constructivist design of the Unified Service Description Language (USDL), aimed at describing services across the human-to-automation continuum, is presented. The proposal of USDL follows well-defined requirements which are expressed against a common service discourse and synthesized from currently available service description efforts. USDL's concepts and modules are evaluated for their support of the different requirements and use cases.
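
To illustrate what a business-aware description adds over a purely technical interface, the sketch below models pricing, service-level and licensing facets alongside an endpoint reference. The field layout is an invented illustration in the spirit of USDL's modules, not the USDL schema itself:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PricePlan:
    name: str
    price_per_call: float
    currency: str = "EUR"

@dataclass
class ServiceLevel:
    availability_pct: float          # e.g. 99.9
    max_response_ms: int

@dataclass
class ServiceDescription:
    """Business-facing facets next to the technical endpoint, so that brokers
    and other intermediaries can process the offer automatically."""
    name: str
    endpoint: str                    # reference to the technical interface
    pricing: List[PricePlan] = field(default_factory=list)
    service_level: Optional[ServiceLevel] = None
    license: str = "proprietary"

# Hypothetical offer a broker could compare against others on price and SLA.
svc = ServiceDescription(
    name="GeocodingService",
    endpoint="https://example.org/geocode?wsdl",
    pricing=[PricePlan("pay-per-use", 0.002)],
    service_level=ServiceLevel(availability_pct=99.5, max_response_ms=300),
)
```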