769 results for customer analytics
Abstract:
In today's fast-paced and interconnected digital world, the data generated by an increasing number of applications is being modeled as dynamic graphs. The graph structure encodes relationships among data items, while the structural changes to the graphs as well as the continuous stream of information produced by the entities in these graphs make them dynamic in nature. Examples include social networks where users post status updates, images, videos, etc.; phone call networks where nodes may send text messages or place phone calls; road traffic networks where the traffic behavior of the road segments changes constantly; and so on. There is tremendous value in storing, managing, and analyzing such dynamic graphs and deriving meaningful insights in real time. However, the majority of work in graph analytics assumes a static setting, and there is a lack of systematic study of the various dynamic scenarios, the complexity they impose on the analysis tasks, and the challenges in building efficient systems that can support such tasks at a large scale. In this dissertation, I design a unified streaming graph data management framework and develop prototype systems to support increasingly complex tasks on dynamic graphs. In the first part, I focus on the management and querying of distributed graph data. I develop a hybrid replication policy that monitors the read-write frequencies of the nodes to decide dynamically what data to replicate, and whether to do eager or lazy replication, in order to minimize network communication and support low-latency querying. In the second part, I study parallel execution of continuous neighborhood-driven aggregates, where each node aggregates the information generated in its neighborhoods. I build my system around the notion of an aggregation overlay graph, a pre-compiled data structure that enables sharing of partial aggregates across different queries and also allows partial pre-computation of the aggregates to minimize query latencies and increase throughput. Finally, I extend the framework to support continuous detection and analysis of activity-based subgraphs, where subgraphs can be specified using both graph structure and activity conditions on the nodes. Queries in my system are specified using a set of active structural primitives, which allows the query evaluator to use a set of novel optimization techniques, thereby achieving high throughput. Overall, in this dissertation, I define and investigate a set of novel tasks on dynamic graphs, design scalable optimization techniques, build prototype systems, and show the effectiveness of the proposed techniques through extensive evaluation using large-scale real and synthetic datasets.
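As an illustration of the kind of decision logic such a hybrid replication policy might use, the following TypeScript sketch tracks per-node read and write counts and chooses between eager replication, lazy replication, and no replication. The thresholds, names, and decision rule are hypothetical assumptions for illustration and are not taken from the dissertation.

```typescript
// Minimal sketch of a hybrid replication policy: each node's read/write
// frequencies are monitored and used to pick a replication strategy.
// Threshold values and the decision rule are illustrative assumptions.

type ReplicationMode = "eager" | "lazy" | "none";

interface NodeStats {
  reads: number;
  writes: number;
}

class HybridReplicationPolicy {
  private stats = new Map<string, NodeStats>();

  recordRead(nodeId: string): void {
    this.getStats(nodeId).reads += 1;
  }

  recordWrite(nodeId: string): void {
    this.getStats(nodeId).writes += 1;
  }

  // Decide how to replicate a node based on its observed access pattern.
  decide(nodeId: string): ReplicationMode {
    const { reads, writes } = this.getStats(nodeId);
    const total = reads + writes;
    if (total < 10) return "none";        // too little traffic to justify a replica
    const readRatio = reads / total;
    if (readRatio > 0.8) return "eager";  // read-heavy: push updates immediately
    if (readRatio > 0.5) return "lazy";   // mixed: replicate, but defer propagation
    return "none";                        // write-heavy: remote reads are cheaper
  }

  private getStats(nodeId: string): NodeStats {
    let s = this.stats.get(nodeId);
    if (!s) {
      s = { reads: 0, writes: 0 };
      this.stats.set(nodeId, s);
    }
    return s;
  }
}

// Example: a node read far more often than it is written gets eager replication.
const policy = new HybridReplicationPolicy();
for (let i = 0; i < 90; i++) policy.recordRead("user:42");
for (let i = 0; i < 5; i++) policy.recordWrite("user:42");
console.log(policy.decide("user:42")); // "eager"
```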
Abstract:
Skepticism about promised value-added is forcing suppliers to provide tangible evidence of the value they can deliver to customers in industrial markets. Despite this, quantifying customer benefits is considered one of the most difficult parts of business-to-business selling. The objective of this research is to identify the desired and perceived customer benefits of KONE JumpLift™ and to improve the overall customer value quantification and selling process of the solution. The study was conducted as a qualitative case analysis including seven interviews with key stakeholders from three different market areas. The market areas were chosen based on where the offering has been utilized, and the research was carried out through five telephone and two email interviews. The main desired and perceived benefits span many types of value, for example economic, functional, symbolic, and epistemic value, but they vary across the studied market areas. The most important result of the research was identifying the biggest challenges in selling the offering, which are communicating and proving the potential value to customers. In addition, the sales arguments have different relative importance in the studied market areas, which creates challenges for salespeople trying to sell the offering effectively. At the managerial level, this implies a need to invest in a new sales tool and to train the salespeople.
Abstract:
Previously conducted research projects in the field of logistics services have emphasized the importance of value added services in customer value creation. Through value added services, companies can extend their service portfolio and gain higher customer satisfaction and loyalty. At a more general level, service marketing has been recognized as challenging due to the intangible nature of services, which has caused issues in pricing and value perceptions. To tackle these issues, scholars have suggested well-managed customer reference marketing practices. The main goal of this research work is to identify shortages in the current service offering and how these shortages can be fixed. Due to the low capacity utilization of the warehouse premises, there is a need to find the main factors causing or affecting the current situation. The research aims to offer a set of alternatives for overcoming these issues. All the potential business opportunities are evaluated and the promising prospects are discussed. The focus is on logistics value added services and how they affect route decisions in logistics. Simultaneously, the aim is to create a holistic understanding of how added value and offered services affect logistics centralization. Moreover, customer value creation and the effectiveness of customer references in logistics service marketing are emphasized in this project. Logistics value added services have a minor effect on logistics decisions; routes are chosen on a low-cost basis. However, it is challenging to track down logistics costs and break them down into different phases. Customer value as such is a difficult concept, which causes challenges when services are sold on value-based principles. Customer references are useful for logistics service providers and should be exploited in marketing, as they reduce the perceived risk and give credibility to the service provider.
Abstract:
International audience
Abstract:
The objective of the thesis is to develop a project management procedure for chilled beam projects. The organization has recognized that project management techniques could help in large and complex projects. Information sharing has been challenging in projects, so improving information sharing is one key topic of the thesis. Academic research and literature are used to find suitable project management theories and methods. The main theories relate to the phases of a project and to project management tools. Practical knowledge of project management is collected from two project-business-oriented companies. Project management tools are chosen and modified to fulfill the needs of the chilled beam projects. The result of the thesis is a proposed project management procedure, which includes the phases of chilled beam projects and the project milestones. The procedure helps to recognize the most critical phases of a project, and the tools help to manage project information. The procedure increases knowledge of project management techniques and tools, and it also establishes a coherent project management working method within the chilled beam project group.
Abstract:
Most economic transactions nowadays are due to the effective exchange of information, in which digital resources play a huge role. New actors are coming into existence all the time, so organizations face difficulties in keeping their current customers and attracting new customer segments and markets. Companies are trying to find the key to their success, and creating superior customer value seems to be one solution. Digital technologies can be used to deliver value to customers in ways that extend customers' normal conscious experiences in the context of time and space. By creating customer value, companies can gain the increased loyalty of existing customers and better ways to serve new customers effectively. Based on these assumptions, the objective of this study was to design a framework that enables organizations to create customer value in digital business. The research was carried out as a literature review and an empirical study, which consisted of a web-based survey and semi-structured interviews. The data from the empirical study was analyzed as mixed research with qualitative and quantitative methods. These methods were used because the objective of the study was to gain a deeper understanding of an existing phenomenon; accordingly, the study used statistical procedures and describes value creation as a phenomenon. The framework was designed first based on the literature and then updated based on the findings from the empirical study. As a result, relationship, understanding the customer, focusing on the core product or service, product or service quality, incremental innovations, service range, corporate identity, and networks were chosen as the top elements of customer value creation, and measures for these elements were identified. With the measures, companies can manage the elements in value creation when dealing with present and future customers and also manage the operations of the company. In conclusion, creating customer value requires understanding the customer and a great deal of information sharing, which can be eased by digital resources. Understanding the customer helps to produce products and services that fulfill customers' needs and desires, which could result in increased sales and make it easier to establish efficient processes.
Abstract:
This case study aims to fill a research gap in the literature by investigating how customers experience customer involvement in new service development, in addition to giving insight into organisational customers' motivations to become involved in service development. These subjects are studied through three interviews. The thesis reviews previous findings regarding customer-driven new service development, customer involvement, customer roles, modes of involvement, communication in the involvement process, the role of customer engagement, and the motivational drivers for customers. The thesis also explains what new service development is and makes a distinction between new service development and new service design. The results revealed that organisational customers want to be involved throughout the development process, with active involvement in the beginning and end phases. Moreover, customers prefer face-to-face methods and active, bidirectional communication throughout the process. The findings propose seven motivational factors, a new framework for customer-driven new service development, and a communication process map. The managerial implications list five themes for service providers to take into consideration when involving customers in the service development process.
Abstract:
The company under study opened an innovation centre in late 2015. The objective of the research is to examine ways of identifying customer needs in the innovation centre and to determine how customer needs are incorporated into the innovation and product development strategy. A comprehensive customer needs assessment process is presented, and the process is adapted to the company based on the results of a survey conducted among its customers. In addition, the company's product managers were interviewed so that their views on developing the customer needs assessment and on incorporating the needs into the strategy could also be utilized. A group-work-based method, in which needs are gathered at the innovation centre, was found suitable for mapping customer needs. In addition, a computer-supported GDSS meeting helps to avoid several common meeting problems. According to the study, the major development trends in customer needs and the most important needs can be incorporated into the strategy by utilizing innovation fields, scenarios, and roadmaps, as well as customer needs tables.
Abstract:
As usage metrics continue to attain an increasingly central role in library system assessment and analysis, librarians tasked with system selection, implementation, and support are driven to identify metric approaches that simultaneously require less technical complexity and provide greater levels of data granularity. Such approaches allow systems librarians to present evidence-based claims of platform usage behaviors while reducing the resources necessary to collect such information, thereby representing a novel approach to real-time user analysis as well as a dual benefit of active and preventative cost reduction. As part of the DSpace implementation for the MD SOAR initiative, the Consortial Library Application Support (CLAS) division has begun test implementation of the Google Tag Manager analytics system in an attempt to collect custom analytical dimensions to track author- and university-specific download behaviors. Building on the work of Conrad, CLAS seeks to demonstrate that the GTM approach to custom analytics provides granular metadata-based usage statistics in an approach that will prove extensible for additional statistical gathering in the future. This poster will discuss the methodology used to develop these custom tag approaches, the benefits of using the GTM model, and the risks and benefits associated with further implementation.
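For context, custom dimensions in Google Tag Manager are typically populated by pushing key-value pairs onto the page's dataLayer, which container variables then map onto the analytics hit. The TypeScript sketch below shows that general pattern for author- and university-level download events; the event name, field names, and example values are hypothetical and do not reflect the actual MD SOAR configuration.

```typescript
// Generic pattern for feeding custom dimensions to Google Tag Manager:
// push an event with extra keys onto the dataLayer, then map those keys
// to custom dimensions via GTM variables and tags in the container.
// The event and field names below are illustrative, not MD SOAR's setup.

declare global {
  interface Window {
    dataLayer: Record<string, unknown>[];
  }
}

type DownloadEvent = {
  event: string;
  itemAuthor: string;     // hypothetical dimension: author of the downloaded item
  itemUniversity: string; // hypothetical dimension: contributing institution
  bitstreamUrl: string;
};

function trackDownload(author: string, university: string, url: string): void {
  window.dataLayer = window.dataLayer || [];
  const payload: DownloadEvent = {
    event: "item_download",
    itemAuthor: author,
    itemUniversity: university,
    bitstreamUrl: url,
  };
  window.dataLayer.push(payload);
}

// Example call from a repository item page:
// trackDownload("Doe, Jane", "Example University", "/bitstream/handle/123/456/thesis.pdf");

export { trackDownload };
```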
Abstract:
Sequences of timestamped events are currently being generated across nearly every domain of data analytics, from e-commerce web logging to electronic health records used by doctors and medical researchers. Every day, this data type is reviewed by humans who apply statistical tests, hoping to learn everything they can about how these processes work, why they break, and how they can be improved upon. To further uncover how these processes work the way they do, researchers often compare two groups, or cohorts, of event sequences to find the differences and similarities between outcomes and processes. With temporal event sequence data, this task is complex because of the variety of ways single events and sequences of events can differ between the two cohorts of records: the structure of the event sequences (e.g., event order, co-occurring events, or frequencies of events), the attributes of the events and records (e.g., gender of a patient), or metrics about the timestamps themselves (e.g., duration of an event). Running statistical tests to cover all these cases and determining which results are significant becomes cumbersome. Current visual analytics tools for comparing groups of event sequences emphasize either a purely statistical or a purely visual approach for comparison. Visual analytics tools leverage humans' ability to easily see patterns and anomalies that they were not expecting, but are limited by uncertainty in findings. Statistical tools emphasize finding significant differences in the data, but often require researchers to have a concrete question and do not facilitate more general exploration of the data. Combining visual analytics tools with statistical methods leverages the benefits of both approaches for quicker and easier insight discovery. Integrating statistics into a visualization tool presents many challenges on the frontend (e.g., displaying the results of many different metrics concisely) and in the backend (e.g., scalability challenges with running various metrics on multi-dimensional data at once). I begin by exploring the problem of comparing cohorts of event sequences and understanding the questions that analysts commonly ask in this task. From there, I demonstrate that combining automated statistics with an interactive user interface amplifies the benefits of both types of tools, thereby enabling analysts to conduct quicker and easier data exploration, hypothesis generation, and insight discovery. The direct contributions of this dissertation are: (1) a taxonomy of metrics for comparing cohorts of temporal event sequences, (2) a statistical framework for exploratory data analysis with a method I refer to as high-volume hypothesis testing (HVHT), (3) a family of visualizations and guidelines for interaction techniques that are useful for understanding and parsing the results, and (4) a user study, five long-term case studies, and five short-term case studies which demonstrate the utility and impact of these methods in various domains: four in the medical domain, one in web log analysis, two in education, and one each in social networks, sports analytics, and security. My dissertation contributes an understanding of how cohorts of temporal event sequences are commonly compared and the difficulties associated with applying and parsing the results of these metrics. It also contributes a set of visualizations, algorithms, and design guidelines for balancing automated statistics with user-driven analysis to guide users to significant, distinguishing features between cohorts. This work opens avenues for future research in comparing two or more groups of temporal event sequences, opening traditional machine learning and data mining techniques to user interaction, and extending the principles found in this dissertation to data types beyond temporal event sequences.
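The abstract does not spell out how HVHT decides which of the many test results are significant, but a standard ingredient when running large batches of hypothesis tests is a multiple-comparison correction. The TypeScript sketch below applies the Benjamini-Hochberg false discovery rate procedure to a set of already computed p-values, one per comparison metric; it is a generic illustration of that idea, not the dissertation's actual algorithm, and the metric names and p-values are invented.

```typescript
// Benjamini-Hochberg procedure: given p-values from many metric comparisons
// (e.g., event frequencies, durations, attribute differences between cohorts),
// return the metrics whose differences survive a false-discovery-rate threshold.
// This is a generic multiple-testing correction, not the HVHT implementation.

interface MetricTest {
  metric: string;
  pValue: number;
}

function benjaminiHochberg(tests: MetricTest[], fdr = 0.05): MetricTest[] {
  const sorted = [...tests].sort((a, b) => a.pValue - b.pValue);
  const m = sorted.length;
  let cutoffIndex = -1;
  // Find the largest k such that p_(k) <= (k / m) * fdr.
  sorted.forEach((t, i) => {
    if (t.pValue <= ((i + 1) / m) * fdr) cutoffIndex = i;
  });
  return cutoffIndex < 0 ? [] : sorted.slice(0, cutoffIndex + 1);
}

// Example: p-values for a handful of cohort-comparison metrics (invented values).
const significant = benjaminiHochberg([
  { metric: "event order: admit -> surgery", pValue: 0.001 },
  { metric: "median gap: diagnosis -> treatment", pValue: 0.012 },
  { metric: "frequency of readmission events", pValue: 0.030 },
  { metric: "patient gender distribution", pValue: 0.400 },
]);
console.log(significant.map((r) => r.metric));
```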
Abstract:
Agent-based modelling and simulation offers a new and exciting way of understanding the world of work. In this paper we describe the development of an agent-based simulation model designed to help understand the relationship between human resource management practices and retail productivity. We report on the current development of our simulation model, which includes new features concerning the evolution of customers over time. To test some of these features we have conducted a series of experiments dealing with customer pool sizes, standard and noise reduction modes, and the spread of word of mouth. Our multidisciplinary research team draws upon expertise from work psychologists and computer scientists. Although we are working within a relatively novel and complex domain, it is clear that intelligent agents offer potential for fostering sustainable organisational capabilities in the future.
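To make the word-of-mouth mechanism mentioned above concrete, the TypeScript sketch below simulates a pool of customer agents whose satisfaction can spread to randomly chosen contacts at each step. The pool size, satisfaction thresholds, and spread probability are illustrative assumptions only and are not parameters of the authors' retail simulation model.

```typescript
// Toy agent-based sketch of word-of-mouth spread in a customer pool.
// Each step, satisfied customers may tell a randomly chosen contact about
// the store, nudging that contact's satisfaction upward. Parameters are
// illustrative only and not taken from the paper's simulation model.

interface CustomerAgent {
  id: number;
  satisfaction: number; // in [0, 1]
}

function simulateWordOfMouth(
  poolSize: number,
  steps: number,
  spreadProbability = 0.3,
): number {
  const agents: CustomerAgent[] = Array.from({ length: poolSize }, (_, id) => ({
    id,
    satisfaction: Math.random(),
  }));

  for (let step = 0; step < steps; step++) {
    for (const agent of agents) {
      const isSatisfied = agent.satisfaction > 0.7;
      if (isSatisfied && Math.random() < spreadProbability) {
        // Tell a random other customer; their satisfaction drifts upward.
        const other = agents[Math.floor(Math.random() * agents.length)];
        if (other.id !== agent.id) {
          other.satisfaction = Math.min(1, other.satisfaction + 0.05);
        }
      }
    }
  }

  // Return the share of satisfied customers after the run.
  return agents.filter((a) => a.satisfaction > 0.7).length / agents.length;
}

console.log(simulateWordOfMouth(500, 50).toFixed(2));
```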
Abstract:
Double Degree
Abstract:
Field lab: Entrepreneurial and innovative ventures
Abstract:
This dissertation investigates customer behavior modeling in service outsourcing and revenue management in the service sector (i.e., the airline and hotel industries). In particular, it focuses on a common theme of improving firms' strategic decisions through the understanding of customer preferences. Decisions concerning degrees of outsourcing, such as firms' capacity choices, are important to performance outcomes. These choices are especially important in high-customer-contact services (e.g., the airline industry) because of the characteristics of services: simultaneity of consumption and production, and intangibility and perishability of the offering. Essay 1 estimates how outsourcing affects customer choices and market share in the airline industry, and consequently the revenue implications of outsourcing. However, outsourcing decisions are typically endogenous: a firm may choose whether or not to outsource based on what it expects to be the best outcome. Essay 2 contributes to the literature by proposing a structural model that can capture a firm's profit-maximizing decision-making behavior in a market, which makes it possible to predict the consequences (i.e., performance outcomes) of future strategic moves. Another emerging area in service operations management is revenue management. Choice-based revenue systems incorporate discrete choice models into traditional revenue management algorithms. To successfully implement a choice-based revenue system, it is necessary to estimate customer preferences as a valid input to the optimization algorithms. The third essay investigates how to estimate customer preferences when part of the market is consistently unobserved. This issue is especially prominent in choice-based revenue management systems: normally a firm observes only its own purchases, while customers who purchase from competitors or do not make purchases are unobserved. Most current estimation procedures depend on unrealistic assumptions about customer arrivals. This study proposes a new estimation methodology that does not require any prior knowledge about the customer arrival process and allows for arbitrary demand distributions. Compared with previous methods, this model performs better when the true demand is highly variable.
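Choice-based revenue management systems rest on a discrete choice model of demand, and the multinomial logit model is the standard building block. The TypeScript sketch below computes purchase probabilities over an offered set of fare products, including an explicit no-purchase option; the product names and utility values are hypothetical, and the essay's estimation procedure for partially unobserved demand is not reproduced here.

```typescript
// Multinomial logit choice probabilities over an offer set, with an explicit
// no-purchase alternative. Utilities below are hypothetical inputs; estimating
// them from partially observed purchase data is the harder problem the essay
// addresses and is not reproduced here.

interface Alternative {
  name: string;
  utility: number; // deterministic part of utility, e.g. beta * attributes
}

function choiceProbabilities(
  offered: Alternative[],
  noPurchaseUtility = 0,
): Map<string, number> {
  const expNoPurchase = Math.exp(noPurchaseUtility);
  const expUtilities = offered.map((a) => Math.exp(a.utility));
  const denom = expNoPurchase + expUtilities.reduce((s, v) => s + v, 0);

  const probs = new Map<string, number>();
  probs.set("no purchase", expNoPurchase / denom);
  offered.forEach((a, i) => probs.set(a.name, expUtilities[i] / denom));
  return probs;
}

// Example: two fare classes offered to an arriving customer (invented utilities).
const probs = choiceProbabilities([
  { name: "economy, $220", utility: 0.8 },
  { name: "economy flexible, $340", utility: 0.2 },
]);
for (const [name, p] of probs) console.log(`${name}: ${p.toFixed(3)}`);
```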