849 results for Service Programming Model


Relevance:

30.00%

Publisher:

Abstract:

The tourism industry is growing rapidly, and thus there is an urgent need to develop sustainable tourism. The research objective of the thesis is to explore and discuss the concept of sustainability within the tourism industry from a marketing point of view, focusing on the tourist producers’ perspective. The thesis consists of four studies, each of which offers a different perspective in support of this overall objective. The first study deals with how a hotel can achieve economic sustainability by creating a high level of customer service delivery using a refined GAP-model. The second study examines how tourist producers at mass tourism destinations work with sustainable tourism as a strategic marketing tool in their tourism product development. The third study addresses economic sustainability at the macro level by estimating the tourism demand for Sweden and Norway in five different countries. The fourth study develops and analyzes the concept of sustainable mass tourism from a conceptual standpoint. Studies 1 and 3 concentrate on economic sustainability from a micro and a national perspective. The main contribution of Study 1 is the refined GAP-model, which can be seen as a theoretical contribution to service marketing research. Study 3 shows that exchange rate trends strongly affect tourists’ choice of destination. Study 2 examines sustainable mass tourism as a strategic marketing tool at the destination level. The conclusions of Study 2 contribute to the findings of Study 4 and concern the tourist producers’ approach to sustainable tourism. One of the contributions of Study 4 is that the concept of sustainable tourism should be divided into three separate parts: economic sustainability, social sustainability, and environmental sustainability.

Relevance:

30.00%

Publisher:

Abstract:

Third party logistics providers and the services they offer have grown substantially in the last twenty years. Even though there has been extensive research on third party logistics providers, along with regular industry reviews within the logistics industry, closer research on partner selection and network models in the third party logistics industry is missing. The perspective taken in this study was to expand network research by treating the logistics service provider as the focal firm in the network. The purpose of the study is to analyze partnerships and networks in the third party logistics industry in order to define how networks are utilized in third party logistics markets, what the reasons for the partnerships have been, and whether there are benefits for the third party logistics provider that can be achieved through building networks and partnerships. The theoretical framework of this study was formed from common theories for studying networks and partnerships, in accordance with models of horizontal and vertical partnerships. The theories applied to the framework and context of this study included the strategic network view and the resource-based view. Applying these two network theories to the position and networks of third party logistics providers in an industrial supply chain, a theoretical model was constructed for analyzing the horizontal and vertical partnerships in which the TPL provider is in focus. The empirical analysis of TPL partnerships consisted of a qualitative document analysis of 33 partnership examples involving companies present in the Finnish TPL markets. For the research, existing documents providing secondary data on the types of partnerships, the reasons for the partnerships, and the outcomes of the partnerships were searched from available online sources.
Findings of the study revealed that third party logistics providers engage in horizontal and vertical partnerships that vary in geographical coverage and in the depth and nature of the relationship. Partnership decisions were found to be made for resource-based as well as strategic reasons. The observed outcomes included cost reduction and improved effectiveness in partnerships aimed at improving existing services. In addition, in partnerships created for innovative service extension, differentiation and the creation of additional value emerged as results of the cooperation. It can be concluded that benefits and competitive advantage can be created by building partnerships in order to expand the service offering and seek synergies.

Relevance:

30.00%

Publisher:

Abstract:

Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

30.00%

Publisher:

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

30.00%

Publisher:

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit, and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field: digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications include network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic ones where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications.
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools in the context of design space exploration to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
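The queue-based firing model described in this abstract can be sketched in a few lines (a minimal illustration in Python, not the RVC-CAL implementation; all names are invented for the example):

```python
from collections import deque

class Node:
    """A dataflow node: fires only when every input queue holds a token."""
    def __init__(self, func, n_inputs):
        self.func = func
        self.inputs = [deque() for _ in range(n_inputs)]  # one FIFO per incoming edge
        self.outputs = []  # downstream queues

    def can_fire(self):
        # The firing rule: a token must be available on every input edge.
        return all(q for q in self.inputs)

    def fire(self):
        args = [q.popleft() for q in self.inputs]  # consume one token per edge
        result = self.func(*args)
        for q in self.outputs:
            q.append(result)  # produce to all downstream edges

# Build a tiny graph: two input streams feeding an adder node.
add = Node(lambda a, b: a + b, n_inputs=2)
sink = deque()
add.outputs.append(sink)

add.inputs[0].extend([1, 2, 3])
add.inputs[1].extend([10, 20, 30])

# A trivial dynamic scheduler: fire any node whose firing rule is satisfied.
while add.can_fire():
    add.fire()

print(list(sink))  # [11, 22, 33]
```

Because the only communication is through these queues, the scheduler needs no knowledge of what `func` does, which is what makes the static analysis discussed in the abstract possible.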

Relevance:

30.00%

Publisher:

Abstract:

The report presents the results of a commercialization project called Container logistic services for forest bioenergy. The project promotes new business that is emerging around overall container logistic services in the bioenergy sector. The results assess the European market for container logistics for biomass, the enablers for new business creation, and the service bundles required for the concept. We also demonstrate the customer value of the container logistic services for different market segments. The concept analysis is based on concept mapping, the quality function deployment (QFD) process, and business network analysis. The business network analysis assesses the key stakeholders and their mutual connections. The performance of the roadside chipping chain is analysed through logistic cost simulation, an RFID system demonstration, and freezing tests. The EU has set a renewable energy target of 20 % for 2020, of which biomass could account for two-thirds. In Europe, the production of wood fuels was 132.9 million solid-m3 in 2012, and the production of wood chips and particles was 69.0 million solid-m3. The wood-based chip and particle flows are suitable for container transportation, providing a market of 180.6 million loose-m3, which means about 4.5 million container loads per year. The intermodal logistics of trucks and trains are promising for the composite containers because the biomass does not freeze onto the inner surfaces during unloading. The overall service concept includes several packages: container rental, container maintenance, terminal services, an RFID tracking service, and a simulation and ERP integration service. Container rental and maintenance would provide transportation entrepreneurs a way to increase capacity without high investment costs. The RFID concept would lead to better work planning, improving profitability throughout the logistic chain, while simulation supports fuel supply optimization.
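The market figures quoted above are mutually consistent; a quick check (assuming the container capacity and loose-to-solid expansion factor are implied by, not stated in, the report):

```python
# Figures from the report abstract.
chips_solid_m3 = 69.0e6    # wood chips and particles, solid-m3 per year
market_loose_m3 = 180.6e6  # container-transportable volume, loose-m3 per year
container_loads = 4.5e6    # container loads per year

# Implied loose/solid expansion factor and per-container volume.
expansion = market_loose_m3 / chips_solid_m3
per_container = market_loose_m3 / container_loads

print(round(expansion, 2))      # 2.62 loose-m3 per solid-m3
print(round(per_container, 1))  # 40.1 loose-m3 per container load
```

The roughly 40 loose-m3 per load matches the volume of a typical large biomass container, which is presumably how the 4.5 million loads figure was derived.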

Relevance:

30.00%

Publisher:

Abstract:

This article reports on the design and characteristics of substrate mimetics in protease-catalyzed reactions. Firstly, the basis of protease-catalyzed peptide synthesis and the general advantages of substrate mimetics over common acyl donor components are described. The binding behavior of these artificial substrates and the mechanism of catalysis are further discussed on the basis of hydrolysis, acyl transfer, protein-ligand docking, and molecular dynamics studies on the trypsin model. The general validity of the substrate mimetic concept is illustrated by the expansion of this strategy to trypsin-like, glutamic acid-specific, and hydrophobic amino acid-specific proteases. Finally, opportunities for the combination of the substrate mimetic strategy with the chemical solid-phase peptide synthesis and the use of substrate mimetics for non-peptide organic amide synthesis are presented.

Relevance:

30.00%

Publisher:

Abstract:

This thesis is a literature study that develops a conceptual model of decision making and decision support in service systems. The study is related to the Ä-Logi, Intelligent Service Logic for Welfare Sector Services research project, and the objective of the study is to develop the necessary theoretical framework to enable further research based on the research project results and material. The study first examines the concepts of service and service systems, focusing on understanding the characteristics of service systems and their implications for decision making and decision support to provide the basis for the development of the conceptual model. Based on the identified service system characteristics, an integrated model of service systems is proposed that views service systems through a number of interrelated perspectives that each offer different, but complementary, implications on the nature of decision making and the requirements for decision support in service systems. Based on the model, it is proposed that different types of decision making contexts can be identified in service systems that may be dominated by different types of decision making processes and where different types of decision support may be required, depending on the characteristics of the decision making context and its decision making processes. The proposed conceptual model of decision making and decision support in service systems examines the characteristics of decision making contexts and processes in service systems, and their typical requirements for decision support. First, a characterization of different types of decision making contexts in service systems is proposed based on the Cynefin framework and the identified service system characteristics. 
Second, the nature of decision making processes in service systems is proposed to be dual: both rational and naturalistic decision making processes exist in service systems, and both play important and complementary roles. Finally, a characterization of typical requirements for decision support in service systems is proposed, examining the decision support requirements associated with different types of decision making processes in characteristically different types of decision making contexts. It is proposed that decision support for processes based on rational decision making can rely on organizational decision support models, while decision support for processes based on naturalistic decision making should rely on supporting the decision makers’ situation awareness and facilitating the development of their tacit knowledge of the system and its tasks. Based on the proposed conceptual model, a further research process is proposed. The study additionally provides a number of new perspectives on the characteristics of service systems and on the nature of decision making and the requirements for decision support in service systems, which can provide a basis for further discussion and research, and support practice alike.

Relevance:

30.00%

Publisher:

Abstract:

This thesis studied the performance of Advanced Metering Infrastructure (AMI) systems in a challenging Demand Response environment. The aim was to find out what kinds of challenges and bottlenecks could be met when utilizing AMI systems for challenging Demand Response tasks. To identify the challenges and bottlenecks, a multilayered demand response service concept was formed. The service consists of seven market layers drawn from the Nordic electricity market and the reserve markets of Fingrid. In the simulations, the AMI systems were benchmarked against these seven market layers. It was found that current-generation AMI systems are capable of delivering Demand Response on the most challenging market layers when observed from a time-critical viewpoint. Additionally, it was found that three major challenges must be acknowledged in order to enable wide-scale Demand Response. The challenges hindering the utilization of wide-scale Demand Response were related to poor standardization of the systems in use, possible problems in data connectivity solutions, and the current electricity market regulation model.

Relevance:

30.00%

Publisher:

Abstract:

This thesis reports investigations on applying the Service Oriented Architecture (SOA) approach to the engineering of multi-platform and multi-device user interfaces. The study has three goals: (1) analyze the present frameworks for developing multi-platform and multi-device applications, (2) extend the principles of SOA to implement a multi-platform and multi-device architectural framework (SOA-MDUI), and (3) apply and validate the proposed framework in the context of a specific application. One of the problems addressed in this ongoing research is the large number of possible combinations when implementing applications on different types of devices. Usually it is necessary to take into account the operating system (OS), the user interface (UI) including its appearance, the programming language (PL), and the architectural style (AS). The proposed approach extends the principles of SOA using pattern-oriented design and model-driven engineering approaches. Synthesizing the present work in these domains, this research built and tested an engineering framework linking Model-Driven Architecture (MDA) and SOA approaches to the development of UIs. This study advances the general understanding of engineering, deploying, and managing multi-platform and multi-device user interfaces as a service.

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, as most businesses move towards sustainability by providing or procuring services from different vendors, the Service Level Agreement (SLA) has become very important both for business providers/vendors and for users/customers. There are many ways to inform users/customers about various services, their execution functionality, and even their non-functional Quality of Service (QoS) aspects, through negotiating, evaluating, or monitoring SLAs. However, these traditional SLAs do not cover eco-efficiency or IT ethics issues for sustainability. This is why the green SLA (GSLA) should come into play. A GSLA is a formal agreement incorporating all the traditional commitments as well as green and ethics issues in the IT business sector. This GSLA research surveys traditional SLA parameters for various services, such as network, compute, storage, and multimedia, in IT business areas. At the same time, the survey focuses on finding the gaps and on integrating these traditional SLA parameters with green issues for all the mentioned services. The research mainly focuses on the integration of green parameters into existing SLAs, defining a GSLA with new green performance indicators and their measurable units. Finally, a GSLA template is defined compiling all the green indicators, such as recycling, radio waves, toxic material usage, obsolescence indication, ICT product life cycles, and energy cost, for sustainable development. Moreover, human interaction and IT ethics issues, such as security and privacy, user satisfaction, intellectual property rights, user reliability, and confidentiality, also need to be added when proposing a new GSLA. However, the integration of new and existing performance indicators in the proposed GSLA for sustainable development could be difficult for ICT engineers.
Therefore, this research also addresses the management complexity of the proposed green SLA by designing a general informational model and analysing the relationships, dependencies, and effects between the various newly identified services under the sustainability pillars. Sustainability, however, can only be achieved through proper implementation of the newly proposed GSLA, which largely depends on monitoring the performance of the green indicators. Therefore, this research focuses on the monitoring and evaluation phase of the GSLA indicators through their interactions with the traditional basic SLA indicators, which would help achieve proper implementation of a future GSLA. Finally, this newly proposed GSLA informational model and its monitoring aspects could help service providers/vendors design their future business strategy in this transitional sustainable society.
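The idea of green indicators with measurable units sitting alongside traditional SLA indicators can be modelled roughly as follows (a hypothetical sketch; the indicator names follow the abstract, but the units, targets, and "lower is better" convention are assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One measurable (G)SLA performance indicator."""
    name: str
    unit: str
    target: float    # agreed threshold
    measured: float  # value from the monitoring phase

    def met(self):
        # Illustrative convention: lower measured values are better.
        return self.measured <= self.target

# A traditional SLA indicator alongside new green indicators.
gsla = [
    Indicator("availability downtime", "h/month", 1.0, 0.5),
    Indicator("energy cost", "kWh/service", 120.0, 135.0),
    Indicator("recycling shortfall", "% non-recycled", 20.0, 15.0),
]

violations = [i.name for i in gsla if not i.met()]
print(violations)  # ['energy cost']
```

Giving every green indicator an explicit unit and target like this is what makes the monitoring and evaluation phase described above mechanically checkable.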

Relevance:

30.00%

Publisher:

Abstract:

This study discusses the evolution of an omni-channel model in managing customer experience. The purpose of this thesis is to expand the current academic literature available on omni-channel and to offer suggestions for omni-channel creation. This is done by studying the features of an omni-channel approach to engaging with customers, and through the sub-objectives of describing the process behind its initiation as well as the special features communication service providers need to take into consideration. The theories used as background for this study relate to customer experience, channel management, omni-channel, and finally change management. The empirical study of this thesis consists of seven expert interviews conducted in a case company between March and November 2014. One of the interviewees is the manager of an omni-channel development team, whilst the rest were in charge of managing the various customer channels of the company. The interview data was organized and analyzed topically. Themes related to major theories on the subject were used to create linkages between theory and practice. The responses were also organized in two groups based on viewpoint, mapping responses related to the company perspective and to the customers’ perspective. The findings of this study are that omni-channel is among the best tools for companies to respond to the challenge induced by changing customer needs and preferences, as well as an intensifying competitive environment. The omni-channel model was found to promote excellent customer experience and thus to be a source of competitive advantage and increasing financial returns by creating an omni-experience for the customer. Through omni-experience, customers see all transactions with a company as presenting one brand and providing ease and effortlessness in every encounter.
The processes behind omni-channel formulation were identified as: proclaiming customer experience the most important strategic goal, mapping and establishing a unified brand experience in all (service) channels, and empowering the first-line personnel as the gatekeepers of omni-experience. Further, the tools, measurement, and supporting strategies need to be in accordance with the omni-channel strategy, and the customer needs to become a partner in a two-way transaction with the firm. Based on these findings, a model for omni-channel creation is offered. Future research is needed, firstly, to further test these findings and expand the theoretical framework on omni-channel, as it is quite scarce to date, and secondly, to increase the generalizability of the suggested model.

Relevance:

30.00%

Publisher:

Abstract:

Internet of Things (IoT) technologies are developing rapidly, and therefore several standards of interconnection protocols and platforms exist. The existence of heterogeneous protocols and platforms has become a critical challenge for IoT system developers. To mitigate this challenge, a few alliances and organizations have taken the initiative to build frameworks that help to integrate application silos. Some of these frameworks focus only on a specific domain, such as home automation. However, the resource constraints in a large proportion of connected devices make it difficult to build an interoperable system using such frameworks. Therefore, a general-purpose, lightweight interoperability framework that can be used across a range of devices is required. To tackle this heterogeneity, this work introduces an embedded, distributed, and lightweight service bus, the Lightweight IoT Service bus Architecture (LISA), which fits inside the network stack of a small real-time operating system for constrained nodes. LISA provides a uniform application programming interface for an IoT system on a range of devices with varying resource constraints. It hides platform and protocol variations underneath it, thus facilitating interoperability in IoT implementations. LISA is inspired by the Network on Terminal Architecture, a service-centric open architecture by Nokia Research Center. Unlike many other interoperability frameworks, LISA is designed specifically for resource-constrained nodes, and it provides the essential features of a service bus for easy service-oriented architecture implementation. The presented architecture utilizes an intermediate computing layer, a Fog layer, between the small nodes and the cloud, thereby facilitating the federation of constrained nodes into subnetworks.
As a result of the modular and distributed design, the part of LISA running in the Fog layer handles the heavy lifting, assisting the lightweight portion of LISA inside the resource-constrained nodes. Furthermore, LISA introduces a new networking paradigm, Node Centric Networking, to route messages across protocol boundaries and thereby facilitate interoperability. This thesis presents a concept implementation of the architecture and creates a foundation for future extension towards a comprehensive interoperability framework for IoT.
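A uniform service-bus API of the kind the abstract describes might look roughly like this (a hypothetical sketch in Python; LISA itself targets constrained nodes inside an RTOS network stack, and every name below is invented for illustration):

```python
class ServiceBus:
    """Minimal register/invoke bus that hides transport details from callers."""
    def __init__(self):
        self._services = {}

    def register(self, name, handler):
        # In a LISA-like design, the transport (e.g. 6LoWPAN vs. cloud HTTP)
        # would be chosen here, invisibly to the application.
        self._services[name] = handler

    def invoke(self, name, payload):
        if name not in self._services:
            raise LookupError(f"no such service: {name}")
        return self._services[name](payload)

bus = ServiceBus()

# A constrained node exposes a sensor reading as a named service.
bus.register("temperature/read", lambda _payload: {"celsius": 21.5})

# Any other node addresses the service by name, not by protocol or address.
print(bus.invoke("temperature/read", {}))  # {'celsius': 21.5}
```

The point of such a bus is exactly the uniformity claimed above: applications call named services, and platform or protocol variation stays below the API.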

Relevance:

30.00%

Publisher:

Abstract:

Digitalization has been predicted to change the future, as a growing range of non-routine tasks will be automated, offering new kinds of business models for enterprises. Service-oriented architecture (SOA) provides a basis for designing and implementing well-defined problems as reusable services, allowing computers to execute them. Service-oriented design has the potential to act as a mediator between IT and human resources, but enterprises struggle with their SOA adoption and lack a linkage between the benefits and costs of services. This thesis studies the phenomenon of service reuse in enterprises, proposing an ontology that conceptually links different kinds of services with their role as part of the business model. The proposed ontology has been created on the basis of qualitative research conducted in three large enterprises. Service reuse has two roles in enterprises: it enables automated data sharing among human and IT resources, and it may provide cost savings in service development and operations. From a technical viewpoint, the ability to define a business problem as a service is one of the key enablers for achieving service reuse. The research proposes two service identification methods: first, to identify prospective services in the existing documentation of the enterprise, and second, to model the services from a functional viewpoint, supporting service identification sessions with business stakeholders.

Relevance:

30.00%

Publisher:

Abstract:

The importance of industrial maintenance has been emphasized during the last decades; it is no longer a mere cost item, but one of the mainstays of business. Market conditions have worsened lately, investments in production assets have decreased, and at the same time competition has shifted from competition between companies to competition between networks. Companies have focused on their core functions and outsourced support services, such as maintenance, above all to decrease costs. This new phenomenon has led to the increasing formation of business networks, and as a result, a growing need has arisen for new kinds of tools for managing these networks effectively. Maintenance costs are usually a notable part of the life-cycle costs of an item, and it is important to be able to plan future maintenance operations for the strategic period of the company or for the whole life-cycle period of the item. This thesis introduces an item-level life-cycle model (LCM) for industrial maintenance networks. The term item is used as a common term for a part, a component, a piece of equipment, etc. The constructed LCM is a working tool for a maintenance network consisting of customer companies that buy maintenance services and various supplier companies. Each network member is able to input their own cost and profit data related to the maintenance services of one item. As a result, the model calculates the net present values of maintenance costs and profits and presents them from the points of view of all the network members. The thesis indicates that previous LCMs for calculating maintenance costs have often been very case-specific, suitable only for the item in question, and constructed for the needs of a single company, without the network perspective.
The developed LCM is a proper tool for decision making on maintenance services in the network environment: it enables analysing the past and making scenarios for the future, and it offers choices between alternative maintenance operations. The LCM is also suitable for small companies building active networks to offer outsourcing services to large companies. The research also introduces a five-step construction process for designing a life-cycle costing model in the network environment. This five-step design process defines the model components and structure through iteration and the exploitation of user feedback, and the same method can be followed to develop other models. The thesis contributes to the literature on the value and value elements of maintenance services. It examines the value of maintenance services from the perspective of different maintenance network members and presents established value element lists for the customer and the service provider. These value element lists make value visible in the maintenance operations of a networked business. The LCM, supplemented with value thinking, promotes the notion of maintenance from a “cost maker” towards a “value creator”.
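The net-present-value calculation at the core of such an LCM can be sketched as follows (a generic illustration of discounting one network member's cash flows; the discount rate and all figures are invented, and the actual model handles multiple members and items):

```python
def npv(cash_flows, rate):
    """Discount a series of yearly cash flows (years 1..n) to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# One network member's yearly maintenance costs and profits for one item (EUR).
costs   = [12_000, 12_500, 13_000]
profits = [15_000, 15_000, 15_000]
rate = 0.08  # assumed discount rate

# Net present value of the member's net cash flow over the planning period.
net = [p - c for p, c in zip(profits, costs)]
print(round(npv(net, rate), 2))  # 6508.79
```

Computing this per member, as the model does, is what lets costs and profits be presented from the points of view of all the network members rather than a single company.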