929 results for Service level agreement
Abstract:
A special inventory problem is presented: aircraft spares that are repaired and returned to spares, called rotable inventory. Rotable inventory is not consumed and therefore does not change in the medium term, but is rotated through operational, maintenance and stock phases. The objective for inventory performance is fleet Service Level (SL), which affects aircraft dispatch performance. A model is proposed where the fleet SL drives combined stock levels such that cost is optimized. By holding greater numbers of lower-cost items and lower levels of more expensive items, it is possible to achieve substantial cost savings while maintaining performance. This approach is shown to be an advance over the current literature and is tested with case data, with conclusive results.
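To make the cost-versus-service trade-off concrete, below is a minimal Python sketch (not the paper's model) of a standard marginal-analysis heuristic for rotable spares: repeatedly buy the spare whose fill-rate gain per unit cost is largest until a fleet service-level target is reached. Poisson demand during the repair turnaround and a fleet SL approximated as the product of item fill rates are assumptions; the item data are illustrative.

```python
"""A minimal sketch of cost-optimal rotable spares stocking by marginal analysis."""
from math import exp

def fill_rate(stock, demand_rate):
    """P(demand during repair turnaround <= stock) for Poisson demand."""
    p, cum = exp(-demand_rate), exp(-demand_rate)
    for k in range(1, stock + 1):
        p *= demand_rate / k
        cum += p
    return cum

def optimise_stock(items, target_sl):
    """items: list of (name, unit_cost, demand_rate). Greedy marginal buys."""
    stock = {name: 0 for name, _, _ in items}

    def fleet_sl():
        sl = 1.0
        for name, _, lam in items:
            sl *= fill_rate(stock[name], lam)
        return sl

    while fleet_sl() < target_sl:
        # buy the item with the best fill-rate gain per unit of cost
        best = max(
            items,
            key=lambda it: (fill_rate(stock[it[0]] + 1, it[2])
                            - fill_rate(stock[it[0]], it[2])) / it[1],
        )
        stock[best[0]] += 1
    return stock, fleet_sl()

if __name__ == "__main__":
    # hypothetical spares: (name, unit cost, expected demands per turnaround)
    items = [("wheel", 5e3, 2.0), ("pump", 40e3, 0.6), ("avionics_box", 250e3, 0.2)]
    levels, sl = optimise_stock(items, target_sl=0.95)
    print(levels, round(sl, 3))
```

The greedy rule naturally buys many units of the cheap, high-demand item and few of the expensive one, which is the behaviour the abstract describes.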
Abstract:
This paper explores demand and production management challenges in the food processing industry. The goal is first to identify the main production planning constraints and second to explore how each of these constraints affects a company's performance in terms of costs and customer service level. A single case study methodology was preferred since it enabled the collection of in-depth data. Findings suggest that product shelf life, carcass utilization and production lead time are the main constraints affecting supply chain efficiency, and hence a single planning approach is not appropriate when different products have different technological and processing characteristics.
Abstract:
Purpose – The purpose of this research is to develop a holistic approach to maximizing the customer service level while minimizing the logistics cost, using an integrated multiple criteria decision making (MCDM) method for the contemporary transshipment problem. Unlike prevalent optimization techniques, the proposed integrated approach considers both quantitative and qualitative factors in order to maximize the benefits of service deliverers and customers under uncertain environments. Design/methodology/approach – The paper proposes a fuzzy-based integer linear programming model, grounded in the existing literature and validated with an example case. The model integrates the fuzzy modification of the analytic hierarchy process (FAHP) developed here and solves the multi-criteria transshipment problem. Findings – The paper provides several novel insights into how to transform a company from a cost-based model to a service-dominated model by using an integrated MCDM method. It suggests that the contemporary customer-driven supply chain maintains and increases its competitiveness in two ways: optimizing cost and providing the best service simultaneously. Research limitations/implications – This research used one illustrative industry case to exemplify the developed method. Given the generality of the research findings and the complexity of the transshipment service network, more cases across multiple industries are necessary to further enhance the validity of the research output. Practical implications – The paper includes implications for the evaluation and selection of transshipment service suppliers, the construction of an optimal transshipment network, and the management of that network. Originality/value – The major advantages of this generic approach are that both quantitative and qualitative factors under a fuzzy environment are considered simultaneously and that the viewpoints of both service deliverers and customers are addressed. It is therefore believed to be useful and applicable for transshipment service network design.
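As an illustration of the optimization layer only (leaving the fuzzy AHP weighting aside), the sketch below solves a small transshipment network as a linear program in which each arc's score blends monetary cost with a service penalty. The network, the weights w_cost/w_service, and all figures are assumptions, not the paper's case data.

```python
"""A minimal transshipment LP sketch: one plant -> two hubs -> two customers."""
import numpy as np
from scipy.optimize import linprog

# arcs: s->h1, s->h2, h1->c1, h1->c2, h2->c1, h2->c2
cost    = np.array([4.0, 6.0, 3.0, 5.0, 4.0, 2.0])   # unit shipping cost (assumed)
penalty = np.array([0.2, 0.1, 0.3, 0.4, 0.2, 0.1])   # 1 - service score per arc (assumed)
w_cost, w_service = 0.6, 0.4                          # stand-ins for AHP-derived weights
c = w_cost * cost + w_service * 10 * penalty          # composite arc score

# supply limit at the plant (inequality); flow balance at hubs and
# demand satisfaction at customers (equalities)
A_ub = np.array([[1, 1, 0, 0, 0, 0]]); b_ub = [120]
A_eq = np.array([
    [1, 0, -1, -1,  0,  0],   # hub 1 balance
    [0, 1,  0,  0, -1, -1],   # hub 2 balance
    [0, 0,  1,  0,  1,  0],   # customer 1 demand
    [0, 0,  0,  1,  0,  1],   # customer 2 demand
])
b_eq = [0, 0, 50, 60]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6, method="highs")
print(res.x, res.fun)   # optimal flows and composite objective value
```

In the paper's setting the composite weights would come from the FAHP step and the variables would be integer; here a continuous LP keeps the sketch short.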
Abstract:
Despite the importance of information and communication technology (ICT) in the management of transport and logistics systems, there is a shortage of studies in the road freight haulage sector. This paper aims to fill this void through an exploratory survey on ICT adoption and its influencing factors carried out in the Italian road transport market. The paper provides a review of previous research on this topic, allowing the identification of research gaps that are then addressed through a questionnaire survey. The findings provide evidence of a passive stance on ICT usage, characterised by the adoption of isolated applications. The financial risk associated with technology investment and human resources are the main barriers to ICT adoption, while the improvement of service levels and the reliability of transport operations emerge as stimulating factors. The results suggest that the potential benefits of technology have not been fully exploited, and a risk-sensitive stance on ICT is evident, preventing the full incorporation of ICT into business processes.
Abstract:
The enormous potential of cloud computing for improved and cost-effective services has generated unprecedented interest in its adoption. However, a potential cloud user faces numerous risks regarding service requirements, the cost implications of failure, and uncertainty about cloud providers' ability to meet service level agreements. These risks hinder the adoption of cloud computing. We extend work on goal-oriented requirements engineering (GORE) and obstacles to inform the adoption process. We argue that prioritising obstacles and their resolution is core to mitigating risks in the adoption process. We propose a novel systematic method for prioritising obstacles and their resolution tactics using the Analytical Hierarchy Process (AHP). We provide an example to demonstrate the applicability and effectiveness of the approach. To assess the AHP-based choice of resolution tactics, we support the method with stability and sensitivity analyses.
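The prioritisation step rests on standard AHP machinery; a minimal sketch of that step is given below, computing priority weights for a set of resolution tactics from a pairwise comparison matrix via the principal eigenvector and checking Saaty's consistency ratio. The 3x3 comparison matrix is hypothetical, not taken from the paper.

```python
"""A minimal AHP sketch: priority weights and consistency ratio."""
import numpy as np

def ahp_priorities(pairwise):
    """Return (weights, consistency_ratio) for a reciprocal comparison matrix."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                  # normalised priority vector
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)                  # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]  # Saaty's random indices
    return w, ci / ri

if __name__ == "__main__":
    # hypothetical resolution tactics compared pairwise on risk-mitigation value
    M = [[1,   3,   5],
         [1/3, 1,   2],
         [1/5, 1/2, 1]]
    weights, cr = ahp_priorities(M)
    print(weights.round(3), "CR =", round(cr, 3))  # CR < 0.1 => acceptably consistent
```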
Abstract:
In developed societies, healthcare service systems face a double challenge: society expects the service level to rise and the number of errors to drop, while overloaded budgets make cost cutting absolutely necessary. This challenge is comparable in scale to the one the US automotive industry faced from the 1970s onwards. In the automotive industry, the solution was the comprehension and application of the principles and tools of lean management. This study aims to answer whether this solution can also be applied to healthcare. The article first introduces the problems of the healthcare sector, then describes the emergence of the lean management concept and how it became widely known. The second half of the study summarizes the experience available in the literature on the topic and draws conclusions.
Abstract:
The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity.

We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they actually experience, while administrators will be able to maximize their total revenue by utilizing application performance models and SLAs.

This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Networks and Support Vector Machines, for accurately modeling the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these tools. Third, we presented an approach to optimal VM sizing that employs the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm which maximizes the SLA-generated revenue of a data center.
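As a sketch of the performance-modeling idea (not the thesis's actual models or data), the following example trains a Support Vector Machine regressor to map resource allocations to response time on synthetic data and then sizes a VM as the cheapest candidate whose predicted response time meets an assumed SLA.

```python
"""A minimal sketch: SVM-based performance model driving VM sizing."""
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# synthetic training set: [cpu_share, memory_gb] -> response time (ms)
X = rng.uniform([0.2, 1.0], [4.0, 16.0], size=(200, 2))
y = 120.0 / X[:, 0] + 80.0 / X[:, 1] + rng.normal(0, 3, 200)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=1.0))
model.fit(X, y)

# candidate VM sizes with assumed hourly prices, sorted by cost
candidates = [(0.5, 2, 0.05), (1, 4, 0.10), (2, 8, 0.20), (4, 16, 0.40)]
sla_ms = 150.0                                   # assumed SLA threshold
for cpu, mem, price in candidates:
    predicted = model.predict([[cpu, mem]])[0]
    if predicted <= sla_ms:
        print(f"smallest SLA-compliant VM: {cpu} vCPU / {mem} GB at ${price}/h "
              f"(predicted {predicted:.0f} ms)")
        break
```

The thesis additionally considers Artificial Neural Networks and a revenue-driven allocation step; the sketch only shows the model-then-size pattern.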
Abstract:
Winner of a best paper award.
Abstract:
Postprint
Abstract:
Supply chain operations directly affect service levels. Decisions on amending the facility network are generally based on overall cost, leaving out the efficiency of each unit. By decomposing the supply chain superstructure, an efficiency analysis of the facilities (warehouses or distribution centers) that serve customers can be easily implemented. With the proposed algorithm, the selection of a facility is based on service level maximization and not just cost minimization, as the analysis filters all feasible solutions using the Data Envelopment Analysis (DEA) technique. Through multiple iterations, solutions are filtered via DEA and only the efficient ones are selected, leading to cost minimization. In this work, the problem of optimal supply chain network design is addressed with a DEA-based algorithm. A Branch and Efficiency (B&E) algorithm is deployed for the solution of this problem. In this DEA approach, each solution (a potentially installed warehouse, plant, etc.) is treated as a Decision Making Unit and is thus characterized by inputs and outputs. Through additional constraints, named "efficiency cuts", the algorithm selects only efficient solutions providing better objective function values. The applicability of the proposed algorithm is demonstrated through illustrative examples.
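The efficiency test underlying such a Branch and Efficiency scheme is a standard DEA linear program; the sketch below solves an input-oriented CCR model with SciPy, one LP per candidate facility treated as a Decision Making Unit. The inputs and outputs chosen (cost and capacity versus demand served and service level) are illustrative assumptions.

```python
"""A minimal input-oriented CCR DEA sketch, one LP per candidate facility."""
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(inputs, outputs, dmu):
    """Input-oriented CCR efficiency of DMU `dmu` (1.0 means efficient)."""
    X, Y = np.asarray(inputs, float), np.asarray(outputs, float)  # shapes (m, n), (s, n)
    n = X.shape[1]
    c = np.r_[1.0, np.zeros(n)]                   # minimise theta
    A_in  = np.c_[-X[:, [dmu]], X]                # sum_j lambda_j x_ij <= theta * x_i0
    A_out = np.c_[np.zeros((Y.shape[0], 1)), -Y]  # sum_j lambda_j y_rj >= y_r0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, dmu]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun

if __name__ == "__main__":
    # columns = candidate warehouses; input rows = (cost, capacity),
    # output rows = (demand served, service level) -- illustrative numbers
    inputs  = [[100, 120, 90, 150], [80, 60, 70, 100]]
    outputs = [[500, 450, 480, 520], [0.95, 0.90, 0.97, 0.93]]
    for j in range(4):
        print(f"warehouse {j}: efficiency = {dea_efficiency(inputs, outputs, j):.3f}")
```

In a B&E-style search, a facility scoring below 1.0 would be cut from the branch via an "efficiency cut" rather than evaluated on cost alone.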
Abstract:
Motivated by new and innovative rental business models, this paper develops a novel discrete-time model of a rental operation with random loss of inventory due to customer use. The inventory level is chosen before the start of a finite rental season, and customers not immediately served are lost. Our analysis framework uses stochastic comparisons of sample paths to derive structural results that hold quite generally for demands, rental durations, and rental unit lifetimes. Considering different "recirculation" rules, i.e., which rental unit to choose to meet each demand, we prove the concavity of the expected profit function and identify the optimal recirculation rule. A numerical study clarifies when considering rental unit loss and recirculation rules matters most for the inventory decision: accounting for rental unit loss can increase the expected profit by 7% for a single season and becomes even more important as the time horizon lengthens. We also observe that the optimal inventory level in response to increasing loss probability is non-monotonic. Finally, we show that choosing the optimal recirculation rule over another simple policy allows more rental units to be profitably added, and the profit-maximizing service level increases by up to 6 percentage points.
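A rough sense of why the recirculation rule matters can be had from the simulation sketch below, which is an assumed simplification rather than the paper's model: idle units are assigned either "fewest past rentals first" or "most past rentals first", each use destroys a unit with a fixed probability, and unmet demand is lost.

```python
"""A minimal rental-season simulation comparing two recirculation rules."""
import random

def simulate(n_units, rule, periods=100, demand_rate=3, p_loss=0.02,
             price=10.0, unit_cost=150.0, seed=1):
    rng = random.Random(seed)
    uses = [0] * n_units            # past rental count per surviving unit
    busy = [0] * n_units            # remaining rental duration per unit
    revenue = 0.0
    for _ in range(periods):
        busy = [max(0, b - 1) for b in busy]
        for _ in range(rng.randint(0, 2 * demand_rate)):      # random demand
            idle = [i for i, b in enumerate(busy) if b == 0]
            if not idle:
                continue                                      # lost sale
            pick = (min if rule == "fewest_uses_first" else max)(
                idle, key=lambda i: uses[i])
            busy[pick] = rng.randint(1, 3)                    # rental duration
            uses[pick] += 1
            revenue += price
            if rng.random() < p_loss:                         # unit lost through use
                del uses[pick], busy[pick]
    return revenue - unit_cost * n_units

for rule in ("fewest_uses_first", "most_uses_first"):
    profits = [simulate(12, rule, seed=s) for s in range(200)]
    print(rule, round(sum(profits) / len(profits), 1))
```

The gap between the two averages is the kind of effect the paper's optimal-recirculation result quantifies analytically.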
Abstract:
Mobile network coverage is traditionally provided by outdoor macro base stations, which have a long range and serve a large number of customers. Due to modern passive houses and tightening construction legislation, mobile network service has deteriorated in many indoor locations. Typically, solutions to the indoor coverage problem are expensive and demand actions from the mobile operator, so better solutions are constantly being researched. The solution presented in this thesis is based on Small Cell technology. Small Cells are low-power access nodes designed to provide voice and data services. This thesis concentrates on a specific Small Cell solution called a Pico Cell. The problem with Pico Cells, and Small Cells in general, is that they are a new technological solution for the mobile operator, and the possible problem sources and incidents are not properly mapped. The purpose of this thesis is to identify the possible problems in Pico Cell deployment and how they could be solved within the operator's incident management process. The research in the thesis is carried out through a literature review and a case study, and the possible problems are investigated through lab testing. The automated Pico Cell deployment process was tested in the lab environment and its proper functionality was confirmed. The related network elements were also tested and examined, and the problems that emerged are resolvable. The operator's existing incident management process can be used for Pico Cell troubleshooting with minor updates, and certain prerequisites have to be met before Pico Cell deployment can be considered. The main contribution of this thesis is the Pico Cell integrated incident management process. The presented solution works in theory and solves the problems found during lab testing. The limitations at the customer service level were addressed by adding the necessary tools and designing a working question pattern. Process structures for automated network discovery and Pico-specific radio parameter planning were also added to the mobile network management layer.
Abstract:
This thesis addresses the inventory management problem in a network of several stocking sites. Each site i manages its stock autonomously to satisfy a deterministic demand over a given horizon. A maximum stock level Si is held at each site i. When the reorder point si is reached, an order of size Qi is placed with the distribution center that supplies all sites, where Qi = Si - si. The quantity Qi is delivered within a known lead time Li. If, at a given moment, the demand Di at site i exceeds the quantity on hand, site i calls on one or more other sites of the network to transfer a quantity Xji (j = 1, 2, …, n). This transfer follows a set of rules that take into account transfer, holding, replenishment and shortage costs. The thesis examines six main contributions published in the literature in order to evaluate the contribution of a collaborative model to the performance, in terms of cost and service level, of each site in the network. The investigation is limited to a two-echelon network configuration: one central warehouse and n (n > 2) stocking sites. The case of spare parts, characterized by random demand, is examined in three chapters of the thesis. Another application of these strategies, to collaboration among n hospitals (n > 2), is also examined in this work.
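The policy under study can be sketched in a few lines of simulation code. The example below, an illustrative assumption rather than any of the six reviewed models, runs two sites with (si, Si) replenishment of fixed size Qi = Si - si, a lead time Li from the central depot, and a lateral transfer Xji from the site with the largest surplus when on-hand stock is insufficient.

```python
"""A minimal sketch of (s, S) replenishment with lateral transshipments."""
import random

def simulate(sites, periods=52, transfer_cost=2.0, holding=0.1, shortage=5.0, seed=0):
    rng = random.Random(seed)
    on_hand = {i: p["S"] for i, p in sites.items()}
    pipeline = []                                   # (arrival_period, site, qty)
    cost, served, demanded = 0.0, 0, 0
    for t in range(periods):
        pipeline, arrived = ([o for o in pipeline if o[0] > t],
                             [o for o in pipeline if o[0] <= t])
        for _, i, q in arrived:                     # replenishments arrive after L_i
            on_hand[i] += q
        for i, p in sites.items():
            d = rng.randint(0, p["demand_max"])
            demanded += d
            if d > on_hand[i]:                      # request a lateral transfer X_ji
                donor = max(sites, key=lambda j: on_hand[j] - sites[j]["s"])
                x = min(d - on_hand[i], max(0, on_hand[donor] - sites[donor]["s"]))
                if donor != i and x > 0:
                    on_hand[donor] -= x
                    on_hand[i] += x
                    cost += transfer_cost * x
            filled = min(d, on_hand[i])
            on_hand[i] -= filled
            served += filled
            cost += shortage * (d - filled) + holding * on_hand[i]
            in_transit = sum(q for _, j, q in pipeline if j == i)
            if on_hand[i] + in_transit <= p["s"]:   # reorder Q_i = S_i - s_i
                pipeline.append((t + p["L"], i, p["S"] - p["s"]))
    return served / demanded, cost

sites = {1: {"s": 5, "S": 20, "L": 2, "demand_max": 8},
         2: {"s": 4, "S": 15, "L": 3, "demand_max": 6}}
print(simulate(sites))   # (service level, total cost)
```

Running the same simulation with the transfer step disabled gives a quick, if crude, view of what the collaborative model contributes in cost and service level.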
Abstract:
Part 18: Optimization in Collaborative Networks
Abstract:
Background: Some studies have reported a ceiling effect in the EQ-5D-3L, especially in healthy and/or young individuals. Recently, two further levels have been added to its measurement model (EQ-5D-5L). The purposes of this study were (1) to assess the properties of the EQ-5D-5L in comparison with the standard EQ-5D-3L in a sample of young adults, (2) to foreground the importance of collecting qualitative data to confirm, validate or refine the EQ-5D questionnaire items, and (3) to raise questions pertaining to the wording of these questionnaire items. Methods: The data came from a sample of respondents aged 30 or under (n = 624). They completed both versions of the EQ-5D, which were compared in terms of feasibility, level of inconsistency and ceiling effect. Agreement between the instruments was assessed using correlation coefficients and Bland-Altman plots. Known-groups validity of the EQ-5D-5L was also assessed using non-parametric tests. The discriminative properties were compared using receiver operating characteristic curves. Finally, four interviews were conducted for retrospective reports to elicit respondents' understanding and perceptions of the format, instructions, items, and responses. Results: Quantitative results show a ceiling effect reduction of 25.3 % and a high level of agreement between the two indices. Known-groups validity was confirmed for the EQ-5D-5L. Exploratory interviews indicated ambiguity and a low degree of certainty in conceptualizing the difference between the 'moderate' and 'slight' levels across three dimensions. Conclusions: The EQ-5D-5L performed better than the EQ-5D-3L. However, the exploratory interviews revealed several limitations in the wording of the EQ-5D questionnaire, and highly context-dependent answers point to a lack of illness experience amongst young adults.
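Two of the quantitative comparisons described above (the ceiling effect of each descriptive system and the Bland-Altman limits of agreement between the index values) can be reproduced on synthetic data with the short sketch below; all profiles and index formulas are illustrative assumptions, not the study's data.

```python
"""A minimal sketch: ceiling effect and Bland-Altman summary on synthetic EQ-5D data."""
import numpy as np

rng = np.random.default_rng(42)
n = 624
# synthetic profiles: 5 dimensions, level 1 = no problems (probabilities assumed)
profiles_3l = rng.choice([1, 2, 3], size=(n, 5), p=[0.80, 0.17, 0.03])
profiles_5l = rng.choice([1, 2, 3, 4, 5], size=(n, 5), p=[0.65, 0.20, 0.10, 0.04, 0.01])

ceiling_3l = np.all(profiles_3l == 1, axis=1).mean()
ceiling_5l = np.all(profiles_5l == 1, axis=1).mean()
print(f"ceiling 3L: {ceiling_3l:.1%}, 5L: {ceiling_5l:.1%}, "
      f"reduction: {ceiling_3l - ceiling_5l:.1%}")

# Bland-Altman summary for two synthetic utility indices on [0, 1]
index_3l = 1 - 0.05 * (profiles_3l - 1).sum(axis=1) + rng.normal(0, 0.02, n)
index_5l = 1 - 0.03 * (profiles_5l - 1).sum(axis=1) + rng.normal(0, 0.02, n)
diff = index_3l - index_5l
print(f"mean difference: {diff.mean():.3f}, "
      f"95% limits of agreement: {diff.mean() - 1.96 * diff.std():.3f} "
      f"to {diff.mean() + 1.96 * diff.std():.3f}")
```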