6 results for Service-dominant Logic

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

Vargo and Lusch propose a very exciting framework that aims to expand the boundaries of the marketing discipline by moving away from the existing exchange paradigm towards a Service-Dominant (S-D) logic. This new S-D logic has the potential to strengthen the theoretical grounds of marketing by establishing links to other disciplines. This commentary discusses some aspects of the foundational premises of the S-D logic from the perspective of the MC21 group, with special emphasis on innovation, value creation, and resource allocation.

Relevance:

100.00%

Publisher:

Abstract:

From a Service-Dominant Logic (S-DL) perspective, employees constitute operant resources that firms can draw on to enhance the outcomes of innovation efforts. While research acknowledges that frontline employees (FLEs) constitute, through service encounters, a key interface for the transfer of valuable external knowledge into the firm, the range of potential benefits derived from FLE-driven innovation deserves more investigation. Using a sample of knowledge-intensive business services (KIBS) firms, this study examines how collaboration with FLEs along the new service development (NSD) process, namely FLE co-creation, affects service innovation performance through two routes of distinct effects. Partial least squares structural equation modeling (PLS-SEM) results indicate that FLE co-creation benefits new service (NS) success among FLEs and the firm’s customers, the constituents of the resources route. FLE co-creation also has a positive effect on NSD speed, which in turn enhances NS quality. NSD speed and NS quality form the operational route, which proves to be the most effective path to NS market performance. Accordingly, KIBS managers should value their FLEs as essential partners in achieving successful innovation from both an internal and an external perspective, and develop appropriate mechanisms to guarantee their effective involvement along the NSD process.

Relevance:

30.00%

Publisher:

Abstract:

Service-based systems that are dynamically composed at run time to provide complex, adaptive functionality are currently one of the main development paradigms in software engineering. However, the Quality of Service (QoS) delivered by these systems remains an important concern, and needs to be managed in an equally adaptive and predictable way. To address this need, we introduce a novel, tool-supported framework for the development of adaptive service-based systems called QoSMOS (QoS Management and Optimisation of Service-based systems). QoSMOS can be used to develop service-based systems that achieve their QoS requirements through dynamically adapting to changes in the system state, environment and workload. QoSMOS service-based systems translate high-level QoS requirements specified by their administrators into probabilistic temporal logic formulae, which are then formally and automatically analysed to identify and enforce optimal system configurations. The QoSMOS self-adaptation mechanism can handle reliability- and performance-related QoS requirements, and can be integrated into newly developed solutions or legacy systems. The effectiveness and scalability of the approach are validated using simulations and a set of experiments based on an implementation of an adaptive service-based system for remote medical assistance.
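The abstract describes translating high-level QoS requirements into probabilistic temporal logic formulae and then selecting an optimal configuration. As a minimal sketch of that idea (not the QoSMOS implementation; the formula rendering, configuration names, and cost figures below are illustrative assumptions), a reliability requirement can be rendered as a PCTL-style reachability property and used to filter analysed configurations:

```python
# Hypothetical sketch: map a high-level reliability requirement to a
# PCTL-style formula, then pick the cheapest configuration whose
# model-checked success probability satisfies the requirement.

def to_pctl(min_reliability: float) -> str:
    """Render a reliability requirement as a PCTL reachability formula."""
    return f'P>={min_reliability} [ F "success" ]'

def select_configuration(analysed: dict, min_reliability: float):
    """Return the cheapest configuration meeting the requirement, or None."""
    feasible = {name: cost for name, (prob, cost) in analysed.items()
                if prob >= min_reliability}
    if not feasible:
        return None
    return min(feasible, key=feasible.get)

# Illustrative analysis results: config -> (success probability, cost)
results = {"fast-service": (0.92, 1.0),
           "reliable-service": (0.999, 3.0),
           "backup-pair": (0.995, 2.0)}

print(to_pctl(0.99))                        # P>=0.99 [ F "success" ]
print(select_configuration(results, 0.99))  # backup-pair
```

In QoSMOS the analysis step is performed by formal probabilistic model checking rather than a table lookup; the sketch only shows how a declarative requirement can drive configuration choice.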

Relevance:

30.00%

Publisher:

Abstract:

This thesis makes a contribution to the Change Data Capture (CDC) field by providing an empirical evaluation of the performance of CDC architectures in the context of real-time data warehousing. CDC is a mechanism for providing data warehouse architectures with fresh data from Online Transaction Processing (OLTP) databases. There are two types of CDC architectures: pull architectures and push architectures. There is exiguous data on the performance of CDC architectures in a real-time environment, and such performance data is required to determine the real-time viability of the two architectures. We propose that push CDC architectures are optimal for real-time CDC. However, push CDC architectures are seldom implemented because they are highly intrusive towards existing systems and arduous to maintain. As part of our contribution, we pragmatically develop a service-based push CDC solution, which addresses the issues of intrusiveness and maintainability. Our solution uses Data Access Services (DAS) to decouple CDC logic from the applications. A requirement for the DAS is to place minimal overhead on a transaction in an OLTP environment. We synthesize the DAS literature and pragmatically develop DAS that efficiently execute transactions in an OLTP environment. Essentially, we develop efficient RESTful DAS, which expose Transactions As A Resource (TAAR). We evaluate the TAAR solution and three pull CDC mechanisms in a real-time environment, using the industry-recognised TPC-C benchmark. The optimal CDC mechanism in a real-time environment will capture change data with minimal latency and will have a negligible effect on the database's transactional throughput. Capture latency is the time it takes a CDC mechanism to capture a data change that has been applied to an OLTP database. A standard definition for capture latency, and how to measure it, does not exist in the field. We create this definition and extend the TPC-C benchmark to make the capture latency measurement.
The results from our evaluation show that pull CDC is capable of real-time CDC at low levels of user concurrency. However, as the level of user concurrency scales upwards, pull CDC has a significant impact on the database's transaction rate, which affirms the theory that pull CDC architectures are not viable in a real-time architecture. TAAR CDC, on the other hand, is capable of real-time CDC and places a minimal overhead on the transaction rate, although this performance comes at the expense of CPU resources.
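The thesis defines capture latency as the time between a change being applied to the OLTP database and the CDC mechanism capturing it. As an illustrative sketch of that measurement (the probe class and its method names are hypothetical, not the thesis's TPC-C extension), commit and capture events can be timestamped and differenced per change:

```python
# Hypothetical probe illustrating the capture-latency definition:
# latency = capture timestamp - commit timestamp, per change.
import time

class LatencyProbe:
    def __init__(self):
        self.applied = {}    # change_id -> commit timestamp
        self.latencies = []  # observed capture latencies in seconds

    def on_commit(self, change_id):
        """Record the moment a change is committed to the OLTP database."""
        self.applied[change_id] = time.monotonic()

    def on_capture(self, change_id):
        """Record the moment the CDC mechanism captures that change."""
        self.latencies.append(time.monotonic() - self.applied.pop(change_id))

probe = LatencyProbe()
probe.on_commit("txn-1")
time.sleep(0.01)           # stand-in for CDC pipeline delay
probe.on_capture("txn-1")
print(f"capture latency: {probe.latencies[0] * 1000:.1f} ms")
```

A monotonic clock is used so the measurement is unaffected by wall-clock adjustments; in a real benchmark the commit and capture events would come from the OLTP workload and the CDC mechanism respectively.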

Relevance:

30.00%

Publisher:

Abstract:

In view of the increasing complexity of service logic and functional requirements, a new system architecture based on SOA was proposed for an equipment remote monitoring and diagnosis system. Following the design principles of SOA, the service logic and functional requirements of the remote monitoring and diagnosis system were divided into different levels and granularities, and a loosely coupled web services system was built. The design and implementation scheme of the core function modules for the proposed architecture is presented. A demo system was used to validate the feasibility of the proposed architecture.
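The loose coupling described above means each monitoring or diagnosis capability sits behind a service contract, so callers depend on the interface rather than the implementation. A minimal sketch of that decoupling (the service and class names here are illustrative assumptions, not taken from the paper):

```python
# Hypothetical illustration of loosely coupled service composition:
# clients are written against a service interface, so implementations
# can be swapped or remotely deployed without changing client code.
from abc import ABC, abstractmethod

class DiagnosisService(ABC):
    @abstractmethod
    def diagnose(self, readings: list) -> str:
        """Classify equipment state from sensor readings."""

class ThresholdDiagnosis(DiagnosisService):
    """Coarse-grained service: flags equipment when a reading exceeds a limit."""
    def __init__(self, limit: float):
        self.limit = limit

    def diagnose(self, readings):
        return "fault" if max(readings) > self.limit else "normal"

# The client holds only the interface type; the concrete service could
# equally be a remote web-service proxy with the same contract.
service: DiagnosisService = ThresholdDiagnosis(limit=5.0)
print(service.diagnose([1.2, 6.3, 0.8]))  # fault
```

In the proposed architecture the same separation would be realised with web-service endpoints rather than in-process classes, with granularity chosen per the SOA division described above.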

Relevance:

30.00%

Publisher:

Abstract:

The service supply chain (SSC) has attracted increasing attention from academia and industry. Although extensive product-based supply chain management models and methods exist, they are not applicable to the SSC owing to the differences between services and products. In addition, the existing supply chain management models and methods share some common deficiencies. For these reasons, this paper develops a novel value-oriented model for the management of the SSC using the modeling methods of E3-value and Use Case Maps (UCMs). This model not only resolves the problems of applicability and effectiveness in the existing supply chain management models and methods, but also answers the questions of why the management model takes this form and how the potential profitability of the supply chain can be quantified. Meanwhile, the service business processes of the SSC system can be established using its logic procedure. In addition, the model can determine the value and benefit distribution of the entire service value chain and optimize the operations management performance of the service supply chain.