960 results for Value-Adding
Abstract:
The discussion of a service-dominant logic has made the findings of decades of service marketing research a topic of interest for marketing at large. Some fundamental aspects of the logic, such as value creation and its marketing implications, are more complex than they have so far been treated as being, and need to be further developed to serve marketing theory and practice well. Following the analysis in the present article, it is argued that although customers are co-producers in service processes, according to the value-in-use notion adopted in the contemporary marketing and management literature they are fundamentally the creators of value for themselves. Furthermore, it is concluded that although firms, by providing goods and services as input resources into customers’ consumption and value-generating processes, are fundamentally value facilitators, interactions with customers that exist or can be created enable firms to engage with their customers’ processes and thereby become co-creators of value with their customers. As marketing implications it is observed that 1) the goal of marketing is to support customers’ value creation; 2) following a service logic, and due to the existence of interactions where the firm’s and the customer’s processes merge into an integrated joint value creation process, the firm is not restricted to making value propositions only, but can directly and actively influence the customer’s value fulfilment as well, and extend its marketing process to include activities during customer-firm interactions; and 3) although all goods and services are consumed as service, customers’ purchasing decisions can be expected to depend on whether they have the skills and interest to use a resource, such as a good, as service, or want to buy extended market offerings including process-related elements. Finally, the analysis concludes with five service logic theses.
Abstract:
A large volume of literature suggests that information asymmetry resulting from the spatial separation between investors and investments has a significant impact on the composition of investors’ domestic and international portfolios. I show that institutional factors affecting trading in tangible goods help explain a substantial portion of investors’ spatial bias. More importantly, I demonstrate that an information flow medium whose breadth and richness are directly linked to the bilateral commitment of resources between countries, which I measure by their trading intensity in tangible goods, is consistent with the prevailing country allocation in investors’ international portfolios.
Abstract:
The underpinning logic of value co-creation in service logic is analysed. It is observed that three of the ten foundational premises of the so-called service-dominant logic are problematic and do not support an understanding of value co-creation and value creation that is meaningful for theoretical development and decision making in business and marketing practice. Without a thorough understanding of the interaction concept, the locus and nature of value co-creation cannot be identified. Based on the analysis in the present article, it is observed that a unique contribution of a service perspective on business (service logic) is not that customers are always co-creators of value, but that under certain circumstances the service provider gets opportunities to co-create value together with its customers. Finally, the three problematic premises are reformulated accordingly.
Abstract:
Electric activity of the heart consists of repeated cardiomyocyte depolarizations and repolarizations. Abnormalities in repolarization predispose to ventricular arrhythmias. In body surface electrocardiogram, ventricular repolarization generates the T wave. Several electrocardiographic measures have been developed both for clinical and research purposes to detect repolarization abnormalities. The study aim was to investigate modifiers of ventricular repolarization with the focus on the relationship of the left ventricular mass, antihypertensive drugs, and common gene variants, to electrocardiographic repolarization parameters. The prognostic value of repolarization parameters was also assessed. The study subjects originated from a population of more than 200 middle-aged hypertensive men attending the GENRES hypertension study, and from an epidemiological survey, the Health 2000 Study, including more than 6000 participants. Ventricular repolarization was analysed from digital standard 12-lead resting electrocardiograms with two QT-interval based repolarization parameters (QT interval, T-wave peak to T-wave end interval) and with a set of four T-wave morphology parameters. The results showed that in hypertensive men, a linear change in repolarization parameters is present even in the normal range of left ventricular mass, and that even mild left ventricular hypertrophy is associated with potentially adverse electrocardiographic repolarization changes. In addition, treatments with losartan, bisoprolol, amlodipine, and hydrochlorothiazide have divergent short-term effects on repolarization parameters in hypertensive men. Analyses of the general population sample showed that single nucleotide polymorphisms in KCNH2, KCNE1, and NOS1AP genes are associated with changes in QT-interval based repolarization parameters but not consistently with T-wave morphology parameters. 
T-wave morphology parameters, but not QT interval or T-wave peak to T-wave end interval, provided independent prognostic information on mortality. The prognostic value was specifically related to cardiovascular mortality. The results indicate that, in hypertension, altered ventricular repolarization is already present in mild left ventricular mass increase, and that commonly used antihypertensive drugs may relatively rapidly and treatment-specifically modify electrocardiographic repolarization parameters. Common variants in cardiac ion channel genes and NOS1AP gene may also modify repolarization-related arrhythmia vulnerability. In the general population, T-wave morphology parameters may be useful in the risk assessment of cardiovascular mortality.
Abstract:
The research question of this thesis was how knowledge can be managed with information systems. Information systems can support, but not replace, knowledge management. Systems can mainly store epistemic organisational knowledge included in content, and process data and information. Certain value can be achieved by adding communication technology to systems. Not all communication, however, can be managed. A new layer between communication and manageable information was named knowformation. The knowledge management literature was surveyed, together with information species from philosophy, physics, communication theory, and information systems science. Positivism, post-positivism, and critical theory were studied, but knowformation in extended organisational memory seemed to be socially constructed. A memory management model of an extended enterprise (M3.exe) and the knowformation concept were the findings from iterative case studies covering data, information and knowledge management systems. The cases varied from groups towards the extended organisation. Systems were investigated, and administrators, users (knowledge workers) and managers were interviewed. The model building required alternative sets of data, information and knowledge, instead of the traditional pyramid. The explicit-tacit dichotomy was also reconsidered. As human knowledge is the final aim of all data and information in the systems, the distinction between management of information vs. management of people was harmonised. Information systems were classified as the core of organisational memory. The content of the systems is in practice between communication and presentation. Firstly, the epistemic criterion of knowledge is required neither in the knowledge management literature nor of the content of the systems. Secondly, systems deal mostly with containers, and the knowledge management literature with applied knowledge.
The construction of reality based on the system content and communication also supports the knowformation concept. Knowformation belongs to the memory management model of an extended enterprise (M3.exe), which is divided into horizontal and vertical key dimensions. Vertically, processes deal with content that can be managed, whereas communication can be supported, mainly by infrastructure. Horizontally, the right-hand side of the model contains systems, and the left-hand side content, which should be independent of each other. A strategy based on the model was defined.
Abstract:
A thermal model for a conventional biogas plant has been developed in order to understand the heat transfer from the slurry and the gas holder to the surrounding earth and air, respectively. The computations have been performed for two conditions: (i) when the slurry is at an ambient temperature of 20°C, and (ii) when it is at 35°C, the optimum temperature for anaerobic fermentation. Under both these conditions, the gas holder is the major “culprit” with regard to heat losses from the biogas plant. The calculations provide an estimate of the heat which has to be supplied by external means to compensate for the net heat losses which occur if the slurry is to be maintained at 35°C. Even if this external supply of heat is realised through (the calorific value of) biogas, there is a net increase in the biogas output, and therefore a net benefit, from operating the plant at 35°C. At this elevated temperature, the cooling effect of adding the influent at ambient temperature is not insignificant. In conclusion, the results of the thermal analysis are used to define a strategy for operating biogas plants at optimum temperatures, or at temperatures higher than the ambient.
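The kind of heat-loss estimate the abstract describes can be sketched with a simple steady-state U·A·ΔT balance. This is a minimal illustration, not the paper's actual thermal model: the U-values and surface areas below are invented for the example, and real plants would need conduction into the earth and transient effects treated separately.

```python
# Minimal sketch: steady-state heat-loss estimate for a biogas plant
# held at 35 C in 20 C surroundings. All U-values and areas are
# illustrative assumptions, not figures from the study.

def heat_loss_watts(u_value, area_m2, t_inside_c, t_outside_c):
    """Steady-state loss Q = U * A * (T_inside - T_outside), in watts."""
    return u_value * area_m2 * (t_inside_c - t_outside_c)

# Gas holder loses to air (high assumed U); slurry loses to earth (lower U).
loss_gas_holder = heat_loss_watts(u_value=5.0, area_m2=8.0,  t_inside_c=35, t_outside_c=20)
loss_slurry     = heat_loss_watts(u_value=1.2, area_m2=15.0, t_inside_c=35, t_outside_c=20)

total = loss_gas_holder + loss_slurry  # heat to be supplied externally
print(f"gas holder: {loss_gas_holder:.0f} W, slurry: {loss_slurry:.0f} W, total: {total:.0f} W")
```

With these illustrative numbers the gas holder dominates the losses, mirroring the abstract's observation that it is the major "culprit"; the total is the external heating duty to hold the slurry at 35°C.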
Abstract:
An exact solution is derived for a boundary-value problem for Laplace's equation which is a generalization of the one occurring in the course of solution of the problem of diffraction of surface water waves by a nearly vertical submerged barrier. The method of solution involves the use of complex function theory, the Schwarz reflection principle, and reduction to a system of two uncoupled Riemann-Hilbert problems. Known results, representing the reflection and transmission coefficients of the water wave problem involving a nearly vertical barrier, are derived in terms of the shape function.
Abstract:
The K-means algorithm for clustering is heavily dependent on the initial seed values. We use a genetic algorithm to find a near-optimal partitioning of the given data set by selecting proper initial seed values for the K-means algorithm. The results obtained are very encouraging; in most cases, on data sets having well-separated clusters, the proposed scheme reached the global minimum.
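The idea in this abstract can be sketched as follows: evolve candidate seed sets with a toy genetic algorithm, scoring each by the K-means sum of squared errors (SSE) it leads to. This is a minimal illustration of the approach, not the authors' algorithm; the GA design (selection, mutation-only reproduction) and all parameters are assumptions.

```python
# Sketch: genetic-algorithm selection of K-means initial seeds.
# Individuals are index sets into the data; fitness is the SSE
# after a few Lloyd iterations from those seeds. All GA parameters
# (population size, generations, mutation scheme) are illustrative.
import numpy as np

def kmeans_sse(X, centers, iters=10):
    """Run a few Lloyd iterations from the given centers; return (SSE, centers)."""
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(len(centers)):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d.min(1).sum(), centers

def ga_seed_kmeans(X, k, pop=20, gens=30, rng=np.random.default_rng(0)):
    """Evolve seed index sets; keep the better half, mutate to refill."""
    population = [rng.choice(len(X), k, replace=False) for _ in range(pop)]
    def fitness(idx):
        sse, _ = kmeans_sse(X, X[idx].copy())
        return sse
    for _ in range(gens):
        survivors = sorted(population, key=fitness)[: pop // 2]
        children = []
        for p in survivors:
            child = p.copy()
            child[rng.integers(k)] = rng.integers(len(X))  # swap one seed
            children.append(child)
        population = survivors + children
    best = min(population, key=fitness)
    return kmeans_sse(X, X[best].copy())

# Toy data: two well-separated clusters, where good seeding matters most.
data_rng = np.random.default_rng(1)
X = np.vstack([data_rng.normal(0, 0.1, (50, 2)), data_rng.normal(5, 0.1, (50, 2))])
sse, centers = ga_seed_kmeans(X, k=2)
```

On data like this, seed sets with one point in each cluster score far better than two seeds in the same cluster, so the GA quickly converges to the well-separated optimum the abstract reports.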
Abstract:
In order to further develop the logic of service, value creation, value co-creation and value have to be formally and rigorously defined, so that the nature, content and locus of value and the roles of service providers and customers in value creation can be unambiguously assessed. In the present article, following the underpinning logic of value-in-use, it is demonstrated that in order to achieve this, value creation is best defined as the customer’s creation of value-in-use. The analysis shows that the firm’s and customer’s processes and activities can be divided into a provider sphere, closed to the customer, and a customer sphere, closed to the firm. Value creation occurs in the customer sphere, whereas firms in the provider sphere facilitate value creation by producing resources and processes which represent potential value or expected value-in-use for their customers. By getting access to the closed customer sphere, firms can create a joint value sphere and engage in customers’ value creation as co-creators of value with them. This approach establishes a theoretically sound foundation for understanding value creation in service logic, and enables meaningful managerial implications, for example as to what is required for co-creation of value, and also further theoretical elaborations.
Abstract:
Our study concerns an important current problem, that of diffusion of information in social networks. This problem has received significant attention from the Internet research community in recent times, driven by many potential applications such as viral marketing and sales promotions. In this paper, we focus on the target set selection problem, which involves discovering a small subset of influential players in a given social network to perform a certain task of information diffusion. The target set selection problem manifests in two forms: 1) the top-k nodes problem and 2) the lambda-coverage problem. In the top-k nodes problem, we are required to find a set of k key nodes that would maximize the number of nodes being influenced in the network. The lambda-coverage problem is concerned with finding a minimal-size set of key nodes that can influence a given percentage lambda of the nodes in the entire network. We propose a new way of solving these problems using the concept of the Shapley value, which is a well-known solution concept in cooperative game theory. Our approach leads to algorithms which we call the ShaPley value-based Influential Nodes (SPIN) algorithms for solving the top-k nodes problem and the lambda-coverage problem. We compare the performance of the proposed SPIN algorithms with well-known algorithms in the literature. Through extensive experimentation on four synthetically generated random graphs and six real-world data sets (Celegans, Jazz, NIPS coauthorship data set, Netscience data set, High-Energy Physics data set, and Political Books data set), we show that the proposed SPIN approach is more powerful and computationally efficient. Note to Practitioners: In recent times, social networks have received a high level of attention due to their proven ability to improve the performance of web search, recommendations in collaborative filtering systems, spreading a technology in the market using viral marketing techniques, etc.
It is well known that the interpersonal relationships (or ties or links) between individuals cause change or improvement in the social system, because the decisions made by individuals are influenced heavily by the behavior of their neighbors. An interesting and key problem in social networks is to discover the nodes that can most strongly influence the other nodes in the network. This problem is called the target set selection problem and has two variants: 1) the top-k nodes problem, where we are required to identify a set of k influential nodes that maximize the number of nodes being influenced in the network, and 2) the lambda-coverage problem, which involves finding a set of influential nodes of minimum size that can influence a given percentage lambda of the nodes in the entire network. There are many existing algorithms in the literature for solving these problems. In this paper, we propose a new algorithm which is based on a novel interpretation of information diffusion in a social network as a cooperative game. Using this analogy, we develop an algorithm based on the Shapley value of the underlying cooperative game. The proposed algorithm outperforms the existing algorithms in terms of generality or computational complexity or both. Our results are validated through extensive experimentation on both synthetically generated and real-world data sets.
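The game-theoretic idea behind this abstract can be illustrated in a few lines: treat a set of seed nodes as a coalition, let the characteristic function be the number of nodes it influences, and rank nodes by Monte Carlo estimates of their Shapley values. This is a toy sketch, not the authors' SPIN implementation; the graph and the simple "a node influences itself and its neighbors" value function are assumptions made for the example.

```python
# Sketch: rank nodes of a toy network by approximate Shapley value,
# where a coalition's value is the number of nodes it "covers"
# (members plus their immediate neighbors). Permutation sampling is
# the standard Monte Carlo estimator for Shapley values.
import random

graph = {  # small undirected toy network (assumed for illustration)
    'a': ['b', 'c'], 'b': ['a', 'c'], 'c': ['a', 'b', 'd'],
    'd': ['c', 'e'], 'e': ['d', 'f', 'g'], 'f': ['e'], 'g': ['e'],
}

def coverage(seed_set):
    """Characteristic function v(S): members of S plus their neighbors."""
    covered = set(seed_set)
    for n in seed_set:
        covered.update(graph[n])
    return len(covered)

def shapley(samples=2000, rng=random.Random(0)):
    """Average each node's marginal contribution over random orderings."""
    nodes = list(graph)
    phi = {n: 0.0 for n in nodes}
    for _ in range(samples):
        order = nodes[:]
        rng.shuffle(order)
        coalition, prev = [], 0
        for n in order:
            coalition.append(n)
            val = coverage(coalition)
            phi[n] += val - prev  # marginal contribution of n
            prev = val
    return {n: v / samples for n, v in phi.items()}

phi = shapley()
top_k = sorted(phi, key=phi.get, reverse=True)[:2]  # top-k nodes, k = 2
```

Because the marginal contributions along each ordering telescope, the estimated values sum exactly to v(N), and hub nodes such as 'c' and 'e' (which each cover four nodes on their own) come out on top; with a fixed influence budget k, these are the natural seeds for the top-k nodes problem.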