908 results for Service systems


Relevance:

30.00%

Publisher:

Abstract:

The maturation of the cruise industry has led to increased competition, which demands more efficient operations. Systems engineering, a discipline that studies complex organizations of material, people, and information, is traditionally applied only in the manufacturing sector; however, it can make significant contributions to service industries such as the cruise industry. The author describes this type of engineering, explores how it can be applied to the cruise industry, and presents two case studies demonstrating its application to the luggage delivery process and the information technology help desk process. The results show that this approach can make these processes more productive and enhance profitability for cruise lines.

Relevance:

30.00%

Publisher:

Abstract:

In their study - From Clerk and Cashier to Guest Service Agent - Nancy J. Allin, Director of Quality Assurance and Training, and Kelly Halpine, Assistant Director of Quality Assurance and Training, The Waldorf-Astoria, New York, state at the outset: “The Waldorf-Astoria has taken the positions of registration clerk and cashier and combined them to provide excellent guest service and efficient systems operation.” The authors tell how and why the combination works. That thesis statement defines the article and places it squarely in the service category. Allin and Halpine use their positions at the Waldorf-Astoria in New York City to frame their observations: “The allocation of staff hours has been a challenge to many front office managers who try their hardest to schedule for the norm but provide excellent, efficient service throughout the peaks,” they observe. “…the decision [to combine the positions of registration clerk and cashier] was driven by a desire to improve guest service where its impact is most obvious, at the front desk. Cross-trained employees speed the check-in and check-out process by performing both functions, as the traffic at the desk dictates,” the authors say. Making such a move has produced benefits for both the guests and the hotel. “Benefits to the hotel, in addition to those brought to bear by increased guest satisfaction, include greater flexibility in weekly scheduling and in granting vacations while maintaining adequate staffing at the desk,” say Allin and Halpine. “Another expected outcome, net payroll savings, should also be realized as a consequence of the ability to schedule more efficiently.” The authors point to communication as the key to designing a successful combination such as this with the least amount of service disruption, and they bullet-point what that communication should entail. Issues of seniority, wage and salary rates, organizational charting, filing, scheduling, possible probationary periods, position titles, and physical layouts are all discussed. “It is critical that each of the management issues be addressed and resolved before any training is begun,” Allin and Halpine suggest. “Unresolved issues project confusion and lack of conviction to line employees and the result is frustration and a lack of commitment to the combination process,” they add. Allin and Halpine insist: “Once begun, training must be ongoing and consistent.” On the practical side, the authors note that authorizing overtime is helpful in accomplishing training. “Training must address the fact that employees will be faced with guest situations which are new to them, for example: an employee previously functioning as a cashier will be faced with walking guests. Specific exercises should be included to address these needs,” say the authors.

Relevance:

30.00%

Publisher:

Abstract:

Bonded repair of concrete structures with fiber reinforced polymer (FRP) systems is increasingly being accepted as a cost-efficient and structurally viable method of rapid rehabilitation of concrete structures. However, the relationships between long-term performance attributes, service life, and details of the installation process are not easy to quantify. Accordingly, there is currently a lack of generally accepted construction specifications, making it difficult for the field engineer to certify the adequacy of the construction process. The objective of the present study, as part of the National Cooperative Highway Research Program (NCHRP) Project 10-59B, was to investigate the effect of surface preparation on the behavior of wet lay-up FRP repair systems and consequently develop rational thresholds that provide sufficient performance. The research program comprised both experimental and analytical work for wet lay-up FRP applications. The experimental work included flexure testing of sixty-seven (67) reinforced concrete beams and bond testing of ten (10) reinforced concrete blocks. Four different parameters were studied: surface roughness, surface flatness, surface voids and bug holes, and surface cracks/cuts. The findings were analyzed from various aspects and compared with the data available in the literature. As part of the analytical work, finite element models of the flexural specimens with surface flaws were developed using ANSYS. The purpose of this part was to extend the parametric study on the effects of concrete surface flaws and to verify the experimental results based on nonlinear finite element analysis. Test results showed that surface roughness does not appear to have a significant influence on the overall performance of wet lay-up FRP systems, with or without adequate anchorage, and whether failure was by debonding or rupture of the FRP. Both experimental and analytical results for surface flatness showed that peaks on the concrete surface, in the range studied, do not have a significant effect on the performance of wet lay-up FRP systems; however, valleys of particular size could reduce the strength of wet lay-up FRP systems. Test results regarding surface voids and surface cracks/cuts revealed that previously suggested thresholds for these flaws appear to be conservative, as also confirmed by the analytical study.

Relevance:

30.00%

Publisher:

Abstract:

Modern IT infrastructures are constructed from large-scale computing systems and administered by IT service providers. Manually maintaining such large computing systems is costly and inefficient, so service providers often seek automatic or semi-automatic methodologies for detecting and resolving system issues to improve their service quality and efficiency. This dissertation investigates several data-driven approaches for assisting service providers in achieving this goal. The problems studied by these approaches fall into three aspects of the service workflow: 1) preprocessing raw textual system logs into structured events; 2) refining monitoring configurations to eliminate false positives and false negatives; 3) improving the efficiency of system diagnosis on detected alerts. Solving these problems usually requires a huge amount of domain knowledge about the particular computing systems. The approaches investigated in this dissertation are developed based on event mining algorithms, which are able to automatically derive part of that knowledge from historical system logs, events, and tickets. In particular, two textual clustering algorithms are developed for converting raw textual logs into system events. For refining the monitoring configuration, a rule-based alert prediction algorithm is proposed for eliminating false alerts (false positives) without losing any real alerts, and a textual classification method is applied to identify missing alerts (false negatives) from manual incident tickets. For system diagnosis, this dissertation presents an efficient algorithm for discovering the temporal dependencies between system events with corresponding time lags, which can help administrators determine the redundancies of deployed monitoring situations and the dependencies of system components. To improve the efficiency of incident ticket resolution, several KNN-based algorithms that recommend relevant historical tickets, with resolutions, for incoming tickets are investigated. Finally, this dissertation offers a novel algorithm for searching similar textual event segments over large system logs that assists administrators in locating similar system behaviors in the logs. Extensive empirical evaluation on system logs, events, and tickets from real IT infrastructures demonstrates the effectiveness and efficiency of the proposed approaches.
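To make the ticket-recommendation idea concrete, here is a minimal sketch of a KNN-based recommender over TF-IDF vectors of ticket text. It is an illustration only: the ticket texts, resolutions, and feature choices are hypothetical, and the dissertation's actual algorithms are more sophisticated.

```python
# Minimal sketch: nearest historical tickets (by TF-IDF cosine similarity)
# supply candidate resolutions for an incoming ticket.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

historical_tickets = [
    "disk usage exceeded 90 percent on database server",
    "application server not responding to health checks",
    "backup job failed with permission denied error",
]
resolutions = [
    "purged archived logs to reclaim disk space",
    "restarted the application service and verified the health endpoint",
    "fixed the ACL on the backup target directory",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(historical_tickets)
knn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)

# Recommend resolutions for an incoming ticket.
incoming = vectorizer.transform(["db server disk almost full"])
distances, indices = knn.kneighbors(incoming)
for dist, idx in zip(distances[0], indices[0]):
    print(f"similarity={1 - dist:.2f}  suggested resolution: {resolutions[idx]}")
```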

Relevance:

30.00%

Publisher:

Abstract:

This project on Policy Solutions and International Perspectives on the Funding of Public Service Media Content for Children began on 8 February 2016 and concludes on 31 May 2016. Its outcomes contribute to the policy-making process around BBC Charter Review, which has raised concerns about the financial sustainability of UK-produced children's screen content. The aim of this project is to evaluate different funding possibilities for public service children's content in a more challenging and competitive multiplatform media environment, drawing on experiences outside the UK. The project addresses the following questions:

• What forms of alternative funding exist to support public service content for children in a transforming multiplatform media environment?

• What can we learn from the types of funding and support for children's screen content that are available elsewhere in the world – in terms of regulatory foundations, administration, accountability, levels of funding, and amounts and types of content supported?

• How effective are these funding systems and funding sources for supporting domestically produced content (range and numbers of projects supported; audience reach)?

This stakeholder report constitutes the main outcome of the project and provides an overview and analysis of alternatives for supporting and funding home-grown children's screen content across both traditional broadcasting outlets and emerging digital platforms. The report has been made publicly available so that it can inform policy work and responses to the UK Government White Paper, A BBC for the Future, published by the Department for Culture, Media and Sport in May 2016.

Relevance:

30.00%

Publisher:

Abstract:

Reputation, influenced by ratings from past clients, is crucial for providers competing for custom. For new providers with little track record, a few negative ratings can harm their chances of growing. In the JASPR project, we aim to look at how to ensure automated reputation assessments are justified and informative. Even an honest, balanced review of a service provision may still be an unreliable predictor of future performance if the circumstances differ: for example, a service may previously have relied on different sub-providers than it does now, or been affected by season-specific weather events. A common way to discount ratings that may not reflect future performance is to weight them by recency. We argue that better results are obtained by querying provenance records on how services were provided, for the circumstances of provision, to determine the significance of past interactions. Informed by case studies in global logistics, taxi hire, and courtesy car leasing, we are going on to explore the generation of explanations for reputation assessments, which can be valuable both for clients and for providers wishing to improve their match to the market, and to apply machine learning to predict aspects of service provision which may influence decisions on the appropriateness of a provider. In this talk, I will give an overview of the research conducted and planned on JASPR.

Speaker biography: Dr Simon Miles is a Reader in Computer Science at King's College London, UK, and head of the Agents and Intelligent Systems group. He conducts research in the areas of normative systems, data provenance, and medical informatics at King's, has published widely, and manages a number of research projects in these areas. He was previously a researcher at the University of Southampton after graduating from his PhD at Warwick. He has twice been an organising committee member for the Autonomous Agents and Multi-Agent Systems conference series, and was a member of the W3C working group which published standards on interoperable provenance data in 2013.
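As an illustration of the recency-weighting baseline that the talk argues is insufficient, here is a minimal sketch of recency-weighted reputation aggregation with exponential decay. The rating tuples and half-life parameter are hypothetical, and JASPR's provenance-based assessment is not shown.

```python
import time

def recency_weighted_reputation(ratings, half_life_days=30.0):
    """Aggregate (timestamp, score) pairs with exponential recency decay.

    A rating half_life_days old counts half as much as a fresh one;
    returns None when there are no ratings.
    """
    now = time.time()
    num = den = 0.0
    for ts, score in ratings:
        age_days = (now - ts) / 86400.0
        weight = 0.5 ** (age_days / half_life_days)
        num += weight * score
        den += weight
    return num / den if den else None

# Hypothetical ratings: (unix timestamp, score in [0, 1]).
day = 86400.0
ratings = [(time.time() - 200 * day, 0.2),   # old negative rating fades
           (time.time() - 5 * day, 0.9),
           (time.time() - 1 * day, 0.8)]
print(f"recency-weighted reputation: {recency_weighted_reputation(ratings):.2f}")
```

Note how the old negative rating is almost entirely discounted, regardless of whether its circumstances (for example, a different sub-provider) still apply; that is exactly the information provenance records would recover.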

Relevance:

30.00%

Publisher:

Abstract:

There are two types of work typically performed in services, which differ in the degree of control management has over when the work must be done. Serving customers, an activity that can occur only when customers are in the system, is by its nature uncontrollable work. In contrast, the execution of controllable work does not require the presence of customers and is work over which management has some degree of temporal control. This paper presents two integer programming models for optimally scheduling controllable work simultaneously with shifts. One model explicitly defines variables for the times at which controllable work may be started, while the other uses implicit modeling to reduce the number of variables. In an initial experiment of 864 test problems, the latter model yielded optimal solutions in approximately 81 percent of the time required by the former model. To evaluate the impact on customer service of having front-line employees perform controllable work, a second experiment was conducted simulating 5,832 service delivery systems. The results show that controllable work offers a useful means of improving labor utilization. Perhaps more important, having front-line employees perform controllable work did not degrade the desired level of customer service.
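To illustrate the flavor of such a formulation (not the paper's exact models), here is a minimal sketch of an integer program that schedules shifts and controllable work together, using the PuLP library. The shift patterns, period demands, and controllable-work total are hypothetical.

```python
# Minimal sketch: choose shift staffing and when to do controllable work.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpInteger, value

periods = range(4)                      # planning periods
demand = [3, 5, 2, 4]                   # servers needed for customers
shifts = {"early": [1, 1, 0, 0],        # which periods each shift covers
          "late":  [0, 0, 1, 1],
          "full":  [1, 1, 1, 1]}
controllable_total = 6                  # person-periods of controllable work

prob = LpProblem("shifts_plus_controllable_work", LpMinimize)
staff = {s: LpVariable(f"staff_{s}", lowBound=0, cat=LpInteger) for s in shifts}
ctrl = {t: LpVariable(f"ctrl_{t}", lowBound=0, cat=LpInteger) for t in periods}

# Objective: minimize total staff hired across shifts.
prob += lpSum(staff[s] for s in shifts)

# Capacity in each period must cover customers plus controllable work done then.
for t in periods:
    prob += lpSum(staff[s] * shifts[s][t] for s in shifts) >= demand[t] + ctrl[t]

# All controllable work must be completed somewhere in the horizon.
prob += lpSum(ctrl[t] for t in periods) == controllable_total

prob.solve()
print({s: int(value(staff[s])) for s in shifts},
      {t: int(value(ctrl[t])) for t in periods})
```

Here the per-period ctrl variables play the role of explicit start-time variables; the paper's implicit model reduces the variable count by modeling controllable work less directly.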

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

30.00%

Publisher:

Abstract:

The use of tabletop technology continues to grow in the restaurant industry, and this study identifies the strengths and weaknesses of the technology, how it influences customers, and how it can improve the bottom line for managers and business owners. Results from two studies involving a full-service casual dining chain show that dining time was significantly reduced among patrons who used the tabletop hardware to order or pay for their meals, as was the time required for servers to meet the needs of customers. Also, those who used the devices to order a meal tended to spend more than those who did not. Patrons across the industry have embraced guest-facing technology, such as online reservation systems, mobile apps, payment apps, and tablet-based systems, and may in fact look for such technology when deciding where to dine. Guests' reactions have been overwhelmingly positive, with 70 to 80 percent of consumers citing the benefits of guest-facing technology and applications. The introduction of tabletop technology in the full-service segment has been slower than in quick-service restaurants (QSRs), and guests cite online reservation systems, online ordering, and tableside payment as preferred technologies. Restaurant operators have also cited benefits of guest-facing technology; for example, electronic ordering has led to increased sales, as such systems can induce the purchase of more expensive menu items and side dishes while allowing managers to store order and payment information for future transactions. Researchers have also noted the cost of the technology and potential problems with integration into other systems as two main factors blocking adoption.

Relevance:

30.00%

Publisher:

Abstract:

Motivation researcher Edward Deci has suggested that if we want behavioural change to be sustainable, we have to move past thinking of motivation as something that we ‘do’ to other people and see it rather as something that we as Service Designers can enable service users to ‘do’ by themselves. In this article, Fergus Bisset explores the ways in which Service Designers can create more motivating services. Dan Lockton then looks at where motivating behaviour via Service Design often starts, with the basic ‘pinball’ and ‘shortcut’ approaches. We conclude by proposing that if services are to be sustainable in the long term, we as Service Designers need to strive to accommodate humans' differing levels of motivation and encourage and support service users' sense of autonomy within the services we design.

Relevance:

30.00%

Publisher:

Abstract:

Heterogeneity has to be taken into account when integrating a set of existing information sources into a distributed information system, which nowadays is often based on a Service-Oriented Architecture (SOA). This applies particularly to distributed services such as event monitoring, which are useful in the context of Event-Driven Architectures (EDA) and Complex Event Processing (CEP). Web services deal with this heterogeneity at a technical level but provide little support for event processing. Our central thesis is that such a fully generic solution cannot provide complete support for event monitoring; instead, source-specific semantics, such as certain event types or support for certain event monitoring techniques, have to be taken into account. Our core result is the design of a configurable event monitoring (Web) service that allows us to trade genericity for the exploitation of source-specific characteristics. It thus delivers results for the areas of SOA, Web services, CEP, and EDA.
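As a rough illustration of trading genericity for source-specific characteristics, here is a minimal sketch of a monitoring service whose per-source behavior is driven by declarative configuration. The class names, config fields, and example source are hypothetical, not the paper's actual design.

```python
# Minimal sketch: a generic monitor specialized per source via configuration.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class SourceConfig:
    event_types: List[str]                       # source-specific event types
    techniques: Dict[str, Callable[[dict], bool]] = field(default_factory=dict)

class EventMonitor:
    def __init__(self):
        self.sources: Dict[str, SourceConfig] = {}

    def register(self, source: str, config: SourceConfig):
        self.sources[source] = config

    def handle(self, source: str, event: dict):
        cfg = self.sources.get(source)
        if cfg is None or event.get("type") not in cfg.event_types:
            return  # generic path: ignore events the source did not declare
        for name, predicate in cfg.techniques.items():
            if predicate(event):
                print(f"[{source}] {name} matched event {event}")

monitor = EventMonitor()
monitor.register("warehouse", SourceConfig(
    event_types=["stock_low"],
    techniques={"threshold": lambda e: e.get("quantity", 0) < 10},
))
monitor.handle("warehouse", {"type": "stock_low", "quantity": 4})
```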

Relevance:

30.00%

Publisher:

Abstract:

In today's big data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (the Internet of Things), social networks, communication networks, and many others. Interactive querying and large-scale analytics are increasingly being used to derive value from this big data. A large portion of this data is stored and processed in the Cloud due to the several advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership, and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage, and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required, so reducing the cost of data analytics in the Cloud remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments.

In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built, and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime.

In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing size (progressive samples) for exploratory querying, which provides the data scientist with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics.

Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud. The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, and link prediction. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them into distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
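To illustrate the core idea behind sampling-based progressive analytics (in a far simpler setting than NOW!), here is a minimal sketch that reports progressively refined answers over nested samples of the data. The data, sample schedule, and aggregate are hypothetical; a single seeded shuffle gives repeatable semantics and lets later samples reuse earlier prefixes.

```python
# Minimal sketch: early, progressively refined answers over growing samples.
import random

def progressive_mean(data, steps=(0.01, 0.1, 0.5, 1.0), seed=42):
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)               # one shuffle => nested (prefix) samples
    for frac in steps:
        n = max(1, int(len(shuffled) * frac))
        sample = shuffled[:n]
        yield frac, sum(sample) / n     # approximate answer, refined each step

data = [random.gauss(100, 15) for _ in range(100_000)]
for frac, est in progressive_mean(data):
    print(f"sample={frac:>5.0%}  estimated mean={est:.2f}")
```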

Relevance:

30.00%

Publisher:

Abstract:

The growing pressure to increase the quality of health services, while also reducing costs, has led healthcare organizations to increase their use of Information and Communication Technologies (ICT) through the development and adoption of Healthcare Information Systems (HIS). However, the need to exchange information between HIS and between organizations has also increased, resulting in the problem of interoperability. This problem is considered complex, but the use of Service-Oriented Architecture (SOA) appears to be a good way to address it. This paper presents a systematic review, performed in order to find out how and in which contexts SOA is being used to ensure the interoperability of HIS.
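As a rough sketch of the SOA idea in this context, the following hypothetical example wraps a HIS behind a small HTTP service so that other organizations consume a stable contract rather than the internal database. The endpoint, fields, and data are invented, and a real integration would follow healthcare standards such as HL7 or FHIR.

```python
# Minimal sketch: a HIS exposed as a service with a stable interface.
from flask import Flask, jsonify, abort

app = Flask(__name__)

# Stand-in for the HIS's internal storage.
_PATIENTS = {"p1": {"id": "p1", "name": "Jane Doe", "allergies": ["penicillin"]}}

@app.route("/patients/<patient_id>", methods=["GET"])
def get_patient(patient_id):
    record = _PATIENTS.get(patient_id)
    if record is None:
        abort(404)
    return jsonify(record)  # consumers depend on this contract, not the DB

if __name__ == "__main__":
    app.run(port=8080)
```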

Relevance:

30.00%

Publisher:

Abstract:

Accurate estimation of road pavement geometry and layer material properties through the use of proper nondestructive testing and sensor technologies is essential for evaluating a pavement's structural condition and determining options for maintenance and rehabilitation. For these purposes, pavement deflection basins produced by the nondestructive Falling Weight Deflectometer (FWD) test are commonly used. The FWD test drops weights on the pavement to simulate traffic loads and measures the resulting deflection basins. Backcalculation of pavement geometry and layer properties from FWD deflections is a difficult inverse problem, and solving it with conventional mathematical methods is often challenging due to its ill-posed nature. In this dissertation, a hybrid algorithm was developed to seek robust and fast solutions to this inverse problem. The algorithm is based on soft computing techniques, mainly Artificial Neural Networks (ANNs) and Genetic Algorithms (GAs), as well as numerical analysis techniques to properly simulate the geomechanical system. A widely used layered pavement analysis program, ILLI-PAVE, was employed in the analyses of various flexible pavement types, including full-depth asphalt and conventional flexible pavements, built on either lime-stabilized soils or untreated subgrade. Nonlinear properties of the subgrade soil and the base course aggregate, as transportation geomaterials, were also considered. A computer program, the Soft Computing Based System Identifier (SOFTSYS), was developed. In SOFTSYS, ANNs are used as surrogate models to provide faster solutions than running the nonlinear finite element program ILLI-PAVE directly. The deflections obtained from FWD tests in the field were matched with the predictions obtained from the numerical simulations to develop the SOFTSYS models. The solution to the inverse problem for multi-layered pavements is computationally hard to achieve and is often not feasible due to field variability and the quality of the collected data. The primary difficulty in the analysis arises from the substantial increase in the degree of non-uniqueness of the mapping from the pavement layer parameters to the FWD deflections. The insensitivity of some layer properties lowered SOFTSYS model performance. Still, SOFTSYS models were shown to work effectively with the synthetic data obtained from ILLI-PAVE finite element solutions. In general, SOFTSYS solutions very closely matched the ILLI-PAVE mechanistic pavement analysis results. For SOFTSYS validation, field-collected FWD data were successfully used to predict pavement layer thicknesses and layer moduli of in-service flexible pavements. Some of the very promising SOFTSYS results indicated average absolute errors on the order of 2%, 7%, and 4% for the Hot Mix Asphalt (HMA) thickness estimation of full-depth asphalt pavements, full-depth pavements on lime-stabilized soils, and conventional flexible pavements, respectively. The field validations of SOFTSYS also produced meaningful results: the thickness data obtained from Ground Penetrating Radar testing matched reasonably well with the predictions from the SOFTSYS models. The differences observed in the HMA and lime-stabilized soil layer thicknesses were attributed to deflection data variability from the FWD tests.
The backcalculated asphalt concrete layer thickness results matched better in the case of full-depth asphalt flexible pavements built on lime stabilized soils compared to conventional flexible pavements. Overall, SOFTSYS was capable of producing reliable thickness estimates despite the variability of field constructed asphalt layer thicknesses.
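To make the surrogate-model idea concrete, here is a minimal sketch in which a neural network learns a toy forward mapping from layer parameters to deflection basins and is then inverted by search. The forward model, parameter ranges, and search method (random search standing in for SOFTSYS's GA) are all hypothetical stand-ins.

```python
# Minimal sketch: ANN surrogate for a forward model, inverted by search.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def fake_forward_model(params):
    """Toy stand-in for ILLI-PAVE: (thickness, modulus) -> 4-sensor basin."""
    thickness, modulus = params
    base = 100.0 / (thickness * modulus)
    return base * np.array([1.0, 0.8, 0.6, 0.4])

# Train the surrogate on sampled forward-model runs.
X = rng.uniform([2.0, 1.0], [12.0, 10.0], size=(2000, 2))
Y = np.array([fake_forward_model(p) for p in X])
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=0).fit(X, Y)

# Backcalculate: search for parameters whose predicted basin matches the
# "measured" FWD basin (a GA performs this search in SOFTSYS).
measured = fake_forward_model(np.array([6.0, 4.0]))
candidates = rng.uniform([2.0, 1.0], [12.0, 10.0], size=(5000, 2))
errors = np.linalg.norm(surrogate.predict(candidates) - measured, axis=1)
print("estimated (thickness, modulus):", candidates[np.argmin(errors)])
```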

Relevance:

30.00%

Publisher:

Abstract:

Cover title.