29 results for Key cutting algorithm


Relevance: 20.00%

Abstract:

What is the nature of customer commitment in business-to-business relationships, and what are its antecedents? Which Key Account Management practices help to build customer commitment? Commitment is a central element of Key Account Management, since customer relationships are built upon a foundation of commitment. Long-term key account relationships are built by enhancing and maintaining customer commitment. Customer commitment has various antecedents, and managing commitment involves focusing on these antecedents. This paper explains the nature of commitment, describes its antecedents, and suggests how to manage each of them to strengthen customer commitment.

Relevance: 20.00%

Abstract:

The increasing focus of relationship marketing and customer relationship management (CRM) studies on issues of customer profitability has led to the emergence of an area of research on profitable customer management. Nevertheless, there is a notable lack of empirical research examining the current practices of firms specifically with regard to the profitable management of customer relationships according to the approaches suggested in theory. This thesis fills this research gap by exploring profitable customer management in the retail banking sector. Several topics are covered, including marketing metrics and accountability; challenges in the implementation of profitable customer management approaches in practice; analytic versus heuristic (‘rule of thumb’) decision making; and the modification of costly customer behavior in order to increase customer profitability, customer lifetime value (CLV), and customer equity, i.e. the financial value of the customer base. The thesis critically reviews the concept of customer equity and proposes a Customer Equity Scorecard, providing a starting point for a constructive dialog between marketing and finance concerning the development of appropriate metrics to measure marketing outcomes. Since customer management and measurement issues go hand in hand, profitable customer management is contingent on both marketing management skills and financial measurement skills. A clear gap between marketing theory and practice regarding profitable customer management is also identified. The findings show that key customer management approaches that have been proposed in the literature on profitable customer management for many years are not being actively applied by the banks included in the research. Instead, several areas of customer management decision making are found to be influenced by heuristics. This dilemma for marketing accountability is addressed by emphasizing that CLV and customer equity, which are aggregate metrics, only provide indications of the relative value of customers and the approximate value of the customer base (or groups of customers), respectively. The value created by marketing manifests itself in the effect of marketing actions on customer perceptions, behavior, and ultimately the components of CLV, namely revenues, costs, risk, and retention, as well as additional components of customer equity, such as customer acquisition. The thesis also points out that although costs are a crucial component of CLV, they have largely been neglected in prior CRM research. Cost-cutting has often been viewed negatively in the customer-focused marketing literature on service quality and customer profitability, but the case studies in this thesis demonstrate that reduced costs do not necessarily lead to lower service quality, customer retention, or customer-related revenues. Consequently, this thesis provides an expanded foundation upon which marketers can stake their claim for accountability. By focusing on the full range of drivers and components of CLV and customer equity, marketing can provide specific evidence of how various activities have affected the drivers and components of CLV within different groups of customers, and of the implications for customer equity at the customer-base level.
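
Since the abstract repeatedly leans on the components of CLV (revenues, costs, risk via retention, and discounting), a minimal worked sketch may help. The function below is a generic, textbook-style discounted CLV; the function name, parameters, and numbers are illustrative assumptions, not the thesis's model.

```python
def customer_lifetime_value(margins, retention, discount_rate):
    """Illustrative CLV: sum of expected discounted margins.

    margins: expected net margin (revenues - costs) per period
    retention: per-period retention probability (captures churn risk)
    discount_rate: per-period discount rate
    """
    clv = 0.0
    survival = 1.0  # probability the customer is still active
    for t, margin in enumerate(margins, start=1):
        survival *= retention
        clv += survival * margin / (1 + discount_rate) ** t
    return clv

# Hypothetical example: five periods at a 100-unit margin,
# 90% retention, 10% discount rate
print(customer_lifetime_value([100] * 5, retention=0.9, discount_rate=0.1))
```

The sketch makes the abstract's point concrete: cutting costs raises the per-period margin directly, while risk and retention enter through the survival term.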

Relevance: 20.00%

Abstract:

The analysis of lipid compositions from biological samples has become increasingly important. Lipids play a role in cardiovascular disease, metabolic syndrome and diabetes. They also participate in cellular processes such as signalling, inflammatory response, aging and apoptosis. Moreover, the mechanisms regulating cell membrane lipid compositions are poorly understood, partly because of a lack of good analytical methods. Mass spectrometry has opened up new possibilities for lipid analysis due to its high resolving power, sensitivity and the possibility of structural identification by fragment analysis. The introduction of electrospray ionization (ESI) and advances in instrumentation revolutionized the analysis of lipid compositions. ESI is a soft ionization method, i.e. it avoids unwanted fragmentation of the lipids. Mass spectrometric analysis of lipid compositions is complicated by incomplete separation of the signals, differences in the instrument response of different lipids, and the large amount of data generated by the measurements. These factors necessitate the use of computer software for the analysis of the data. The topic of the thesis is the development of methods for mass spectrometric analysis of lipids. The work includes both computational and experimental aspects of lipid analysis. The first article explores the practical aspects of quantitative mass spectrometric analysis of complex lipid samples and describes how the properties of phospholipids and their concentrations affect the response of the mass spectrometer. The second article describes a new algorithm for computing the theoretical mass spectrometric peak distribution, given the elemental isotope composition and the molecular formula of a compound. The third article introduces programs aimed specifically at the analysis of complex lipid samples and discusses different computational methods for separating the overlapping mass spectrometric peaks of closely related lipids. The fourth article applies the developed methods to simultaneously measure the progress curves of enzymatic hydrolysis for a large number of phospholipids, which are used to determine the substrate specificity of various A-type phospholipases. The data provide evidence that substrate efflux from the bilayer is the key factor determining the rate of hydrolysis.
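
The problem addressed by the second article, computing a theoretical isotopic peak distribution from a molecular formula, can be sketched as repeated convolution of per-element isotope patterns. The code below is a minimal illustrative implementation under that standard approach, not the thesis's algorithm; the isotope table is abridged (C, H, O only) and uses nominal mass shifts rather than exact masses.

```python
from collections import defaultdict

# Approximate isotope patterns: element -> list of (mass shift in Da, abundance).
# Abridged and nominal-mass only; a real tool would use exact masses and full tables.
ISOTOPES = {
    "C": [(0, 0.9893), (1, 0.0107)],
    "H": [(0, 0.999885), (1, 0.000115)],
    "O": [(0, 0.99757), (1, 0.00038), (2, 0.00205)],
}

def isotope_distribution(formula):
    """Peak distribution for a formula given as e.g. {'C': 16, 'H': 32, 'O': 2}."""
    dist = {0: 1.0}  # mass shift -> probability
    for element, count in formula.items():
        for _ in range(count):
            new = defaultdict(float)
            # Convolve the running distribution with one more atom's pattern.
            for shift, p in dist.items():
                for iso_shift, iso_p in ISOTOPES[element]:
                    new[shift + iso_shift] += p * iso_p
            dist = dict(new)
    return sorted(dist.items())

# Palmitic acid, C16H32O2: relative intensities of the M, M+1, M+2, ... peaks
for shift, abundance in isotope_distribution({"C": 16, "H": 32, "O": 2}):
    if abundance > 1e-4:
        print(f"M+{shift}: {abundance:.4f}")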

Relevance: 20.00%

Abstract:

The thesis presents a state-space model for a basketball league and a Kalman filter algorithm for estimating the state of the league. In the state-space model, each basketball team is associated with a rating that represents its strength relative to the other teams. The ratings are assumed to evolve in time following a stochastic process with independent Gaussian increments. The estimation of the team ratings is based on the observed game scores, which are assumed to depend linearly on the true strengths of the teams plus independent Gaussian noise. The team ratings are estimated using a recursive Kalman filter algorithm that produces least-squares optimal estimates of the team strengths and predictions for the scores of future games. Additionally, if the Gaussianity assumption holds, the predictions given by the Kalman filter maximize the likelihood of the observed scores. The team ratings allow probabilistic inference about the ranking of the teams and their relative strengths, as well as about the teams' winning probabilities in future games. The predictions about the winners of the games are correct 65–70% of the time. The team ratings explain 16% of the random variation observed in the game scores. Furthermore, the winning probabilities given by the model are consistent with the observed scores. The state-space model includes four independent parameters that involve the variances of the noise terms and the home court advantage observed in the scores. The thesis presents the estimation of these parameters using the maximum likelihood method as well as other techniques. The thesis also gives various example analyses of the American professional basketball league, i.e., the National Basketball Association (NBA), for the regular seasons played in the years 2005 through 2010. Additionally, the 2009–2010 season is discussed in full detail, including the playoffs.
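
For illustration, here is a minimal sketch of this kind of rating filter, assuming a scalar random-walk rating per team and an observed home-minus-away score difference with a fixed home advantage. The class name, parameter values, and the simplification of ignoring cross-covariances between teams are assumptions of the sketch, not the thesis's exact model.

```python
class TeamRatingFilter:
    """Kalman-style team ratings: each rating follows a Gaussian random walk,
    and the observed score margin is (home rating - away rating) + home
    advantage + Gaussian noise. Cross-covariances between team estimates
    are ignored for brevity."""

    def __init__(self, teams, process_var=0.1, obs_var=100.0, home_adv=3.0):
        self.mean = {t: 0.0 for t in teams}   # rating estimates
        self.var = {t: 25.0 for t in teams}   # rating variances
        self.process_var, self.obs_var, self.home_adv = process_var, obs_var, home_adv

    def predict_margin(self, home, away):
        return self.mean[home] - self.mean[away] + self.home_adv

    def update(self, home, away, observed_margin):
        # Predict step: the random-walk increment inflates both variances.
        self.var[home] += self.process_var
        self.var[away] += self.process_var
        # Update step: one innovation on the observed score difference.
        innovation = observed_margin - self.predict_margin(home, away)
        s = self.var[home] + self.var[away] + self.obs_var
        for team, sign in ((home, 1.0), (away, -1.0)):
            gain = self.var[team] / s
            self.mean[team] += sign * gain * innovation
            self.var[team] *= 1.0 - gain

# Hypothetical teams and observed home-minus-away margins
f = TeamRatingFilter(["A", "B", "C"])
for home, away, margin in [("A", "B", 12), ("B", "C", -5), ("C", "A", 3)]:
    f.update(home, away, margin)
print(f.mean)
```

Win probabilities then follow from the Gaussian predictive distribution of the margin, which is how the filter supports the probabilistic inferences described above.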

Relevance: 20.00%

Abstract:

Two methods of pre-harvest inventory were designed and tested on three cutting sites containing a total of 197 500 m3 of wood. These sites were located in flat-ground boreal forests in northwestern Quebec. Both methods involved scaling of trees harvested to clear the road path one year (or more) prior to the harvest of the adjacent cut-blocks. The first method (ROAD) divides the total road right-of-way volume by the total road area cleared. The resulting volume per hectare is then multiplied by the total cut-block area scheduled for harvest during the following year to obtain the total estimated cutting volume. The second method (STRATIFIED) also involves scaling of the trees cleared from the road. However, in STRATIFIED, log scaling data are stratified by forest stand location. A volume per hectare is calculated for each stretch of road that crosses a single forest stand. This volume per hectare is then multiplied by the remaining area of the same forest stand scheduled for harvest one year later. The sum of all the resulting per-stand estimates gives the total estimated cutting volume for all cut-blocks adjacent to the studied road. A third method (MNR) was also used to estimate the cut volumes of the sites studied. This method represents the existing technique for estimating cutting volume in the province of Quebec: the area of each forest stand is multiplied by an estimated volume per hectare obtained from standard stock tables provided by the government, and the per-stand volumes are summed. The resulting total estimated volume per cut-block for all three methods was then compared with the actual measured cut-block volume (MEASURED). This analysis revealed a significant difference between the MEASURED and MNR methods, with the MNR volume estimate being 30% higher than MEASURED. However, no significant difference from MEASURED was observed for the ROAD and STRATIFIED estimates, which were respectively 19% and 5% lower than MEASURED. Thus the ROAD and STRATIFIED methods are good ways to estimate cut-block volumes after road right-of-way harvest under conditions similar to those examined in this study.
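
The arithmetic behind the two proposed methods is simple enough to state as code. The sketch below follows the descriptions above; the function names and all figures are hypothetical, chosen only to show the two calculations side by side.

```python
def road_estimate(row_volume_m3, row_area_ha, cutblock_area_ha):
    """ROAD method: one volume-per-hectare figure from the whole right-of-way,
    applied to the total cut-block area scheduled for harvest."""
    return row_volume_m3 / row_area_ha * cutblock_area_ha

def stratified_estimate(stands):
    """STRATIFIED method: a separate volume/ha for each forest stand crossed
    by the road, applied to that stand's remaining area.

    stands: list of (scaled road volume m3, road area ha, remaining area ha)
    """
    return sum(vol / area * remaining for vol, area, remaining in stands)

# Hypothetical numbers: 1200 m3 scaled over 8 ha of right-of-way, 150 ha of blocks
print(road_estimate(1200, 8, 150))            # 22500.0 m3
# The same road split across two stands with different stocking
print(stratified_estimate([(700, 4, 60), (500, 4, 90)]))  # 21750.0 m3
```

The difference between the two outputs illustrates why stratifying by stand can matter: ROAD averages away the variation in stocking that STRATIFIED preserves.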

Relevance: 20.00%

Abstract:

We show that the ratio of matched individuals to blocking pairs grows linearly with the number of propose–accept rounds executed by the Gale–Shapley algorithm for the stable marriage problem. Consequently, the participants can arrive at an almost stable matching even without full information about the problem instance; for each participant, knowing only its local neighbourhood is enough. In distributed-systems parlance, this means that if each person has only a constant number of acceptable partners, an almost stable matching emerges after a constant number of synchronous communication rounds. We apply our results to give a distributed (2 + ε)-approximation algorithm for maximum-weight matching in bicoloured graphs and a centralised randomised constant-time approximation scheme for estimating the size of a stable matching.
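
For reference, here is a compact, centralised implementation of the classical Gale–Shapley propose–accept loop that the result builds on; the truncated, distributed variant analysed in the abstract is not reproduced. The sketch assumes complete preference lists over equal-sized sides, and the instance data are made up.

```python
def gale_shapley(proposer_prefs, acceptor_prefs):
    """Classical Gale-Shapley: free proposers propose in preference order,
    acceptors keep the best proposal seen so far. Returns a stable matching."""
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}  # index of next proposal
    engaged = {}                                  # acceptor -> proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if a not in engaged:
            engaged[a] = p
        elif rank[a][p] < rank[a][engaged[a]]:
            free.append(engaged[a])  # a trades up; old partner is free again
            engaged[a] = p
        else:
            free.append(p)           # a rejects p
    return {p: a for a, p in engaged.items()}

prefs_men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
prefs_women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(gale_shapley(prefs_men, prefs_women))  # {'m1': 'w2', 'm2': 'w1'}
```

The abstract's observation corresponds to stopping this loop early: after a bounded number of propose–accept rounds, few blocking pairs remain relative to the number of matched individuals.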

Relevance: 20.00%

Abstract:

We present a distributed 2-approximation algorithm for the minimum vertex cover problem. The algorithm is deterministic, and it runs in (Δ + 1)² synchronous communication rounds, where Δ is the maximum degree of the graph. For Δ = 3, we give a 2-approximation algorithm also for the weighted version of the problem.
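
As context for the approximation ratio, here is the classical sequential 2-approximation via a maximal matching, which the distributed algorithm matches in quality; this sketch is the standard textbook method, not the paper's distributed algorithm, and the example graph is made up.

```python
def vertex_cover_2approx(edges):
    """Classical 2-approximation: greedily build a maximal matching and take
    both endpoints of every matched edge. Every uncovered edge forces two
    vertices in, at most doubling the optimum cover size."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))  # edge still uncovered: add both endpoints
    return cover

# A 5-cycle: the optimum cover has 3 vertices; this finds one of size 4
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(vertex_cover_2approx(edges))  # e.g. {0, 1, 2, 3}
```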

Relevance: 20.00%

Abstract:

We present a local algorithm (constant-time distributed algorithm) for finding a 3-approximate vertex cover in bounded-degree graphs. The algorithm is deterministic, and no auxiliary information besides port numbering is required.

Relevance: 20.00%

Abstract:

In a max-min LP, the objective is to maximise ω subject to Ax ≤ 1, Cx ≥ ω1, and x ≥ 0 for nonnegative matrices A and C. We present a local algorithm (constant-time distributed algorithm) for approximating max-min LPs. The approximation ratio of our algorithm is the best possible for any local algorithm; there is a matching unconditional lower bound.
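
Setting aside the distributed aspect, the max-min LP itself can be solved centrally as an ordinary LP by treating ω as one extra variable. The sketch below encodes this with scipy.optimize.linprog; the solver choice and the tiny instance are assumptions for illustration, not the paper's method.

```python
import numpy as np
from scipy.optimize import linprog

def solve_max_min_lp(A, C):
    """Maximise w subject to A x <= 1, C x >= w 1, x >= 0,
    by encoding [x, w] as one variable vector for a standard LP solver."""
    A, C = np.asarray(A, float), np.asarray(C, float)
    n = A.shape[1]
    c = np.zeros(n + 1)
    c[-1] = -1.0  # linprog minimises, so minimise -w
    # Stack A x <= 1 and (-C x + w 1 <= 0) into one inequality system.
    A_ub = np.vstack([
        np.hstack([A, np.zeros((A.shape[0], 1))]),
        np.hstack([-C, np.ones((C.shape[0], 1))]),
    ])
    b_ub = np.concatenate([np.ones(A.shape[0]), np.zeros(C.shape[0])])
    bounds = [(0, None)] * n + [(None, None)]  # x >= 0, w free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n], res.x[-1]

# Tiny instance: x1 + x2 <= 1, x1 >= w, x2 >= w
x, w = solve_max_min_lp(A=[[1, 1]], C=[[1, 0], [0, 1]])
print(x, w)  # roughly x = [0.5, 0.5], w = 0.5
```

The local algorithm of the abstract approximates this optimum using only constant-radius neighbourhood information, which is what makes the matching lower bound meaningful.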

Relevance: 20.00%

Abstract:

Microbes in natural and artificial environments, as well as in the human body, are a key part of the functional properties of these complex systems. The presence or absence of certain microbial taxa correlates with functional status, such as the risk of disease or the course of the metabolic processes of a microbial community. As microbes are highly diverse and mostly not cultivable, molecular markers such as gene sequences are a potential basis for the detection and identification of key types. The goal of this thesis was to study molecular methods for the identification of microbial DNA in order to develop a tool for the analysis of environmental and clinical DNA samples. Particular emphasis was placed on the specificity of detection, which is a major challenge when analyzing complex microbial communities. The approach taken in this study was the application and optimization of enzymatic ligation of DNA probes coupled with microarray read-out for high-throughput microbial profiling. The results show that fungal phylotypes and human papillomavirus genotypes could be accurately identified from pools of PCR amplicons generated from purified sample DNA. Approximately 1 ng/μl of sample DNA was needed for representative PCR amplification, as measured by comparisons between clone sequencing and microarray results. A minimum of 0.25 amol/μl of PCR amplicons was detectable amongst 5 ng/μl of background DNA, suggesting that the detection limit of the test, comprising a ligation reaction followed by microarray read-out, was approximately 0.04%. Detection directly from sample DNA was shown to be feasible with probes that form a circular molecule upon ligation, followed by PCR amplification of the probe. In this approach, the minimum detectable relative amount of the target genome was found to be 1% of all genomes in the sample, as estimated from 454 deep sequencing results. The signal-to-noise ratio of contact-printed microarrays could be improved by using an internal microarray hybridization control oligonucleotide probe together with a computational algorithm. The algorithm was based on identifying a bias in the microarray data and correcting it, as demonstrated with simulated and real data. The results further suggest that semiquantitative detection is possible with ligation detection, allowing estimation of target abundance in a sample. In practice, however, comprehensive sequence information on full-length rRNA genes is needed to support probe design for complex samples. This study shows that the DNA microarray has the potential to serve as an accurate microbial diagnostic platform that takes advantage of increasing sequence data and replaces the traditional, less efficient methods that still dominate routine testing in laboratories. The data suggest that a ligation-based microarray assay can be optimized to a degree that allows good signal-to-noise and semiquantitative detection.
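
As a loose illustration of the control-probe idea, the sketch below subtracts an estimated background and scales each array's probe signals by its internal hybridization control so that arrays become comparable. This is a generic, assumed normalisation scheme for illustration only, not the thesis's specific bias-correction algorithm.

```python
import numpy as np

def normalise_array(signals, control_signal, background):
    """Generic control-based correction: remove the background level, then
    scale by the internal hybridization control spot's corrected intensity."""
    corrected = np.maximum(np.asarray(signals, float) - background, 0.0)
    return corrected / max(control_signal - background, 1e-9)

# Hypothetical raw probe intensities, control spot intensity, background level
print(normalise_array([520, 1300, 80], control_signal=1000, background=60))
```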