918 results for 150507 Pricing (incl. Consumer Value Estimation)


Relevance:

30.00%

Publisher:

Abstract:

The authors apply economic theory to an analysis of industry pricing. Data from a cross-section of San Francisco hotels is used to estimate the implicit prices of common hotel amenities, and a procedure for using these prices to estimate consumer demands for the attributes is outlined. The authors then suggest implications for hotel decision makers. While the results presented here should not be generalized to other markets, the methodology is easily adapted to other geographic areas.
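The abstract does not spell out the estimation procedure, but the implicit price of a single binary amenity in a hedonic regression reduces to a difference in group means, which can be sketched as follows. All hotel rates below are invented for illustration, not data from the study.

```python
# Minimal hedonic-pricing sketch: the implicit price of one amenity
# (here a pool) is the OLS coefficient of a 0/1 dummy regressor,
# which for a single dummy equals the difference in group means.
# The nightly rates are hypothetical, not the study's San Francisco data.

def implicit_price(prices, has_amenity):
    """OLS slope of price on a binary amenity dummy."""
    with_a = [p for p, a in zip(prices, has_amenity) if a]
    without = [p for p, a in zip(prices, has_amenity) if not a]
    return sum(with_a) / len(with_a) - sum(without) / len(without)

nightly_rates = [180, 210, 150, 240, 160, 220]  # hypothetical hotels ($)
pool = [1, 1, 0, 1, 0, 1]                       # 1 = hotel has a pool

print(implicit_price(nightly_rates, pool))      # premium attributable to a pool
```

With several amenities the same idea generalizes to a multiple regression, each coefficient being that amenity's implicit price holding the others fixed.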

Relevance:

30.00%

Publisher:

Abstract:

Wine reviews, such as those from Wine Spectator and other consumer publications, help drive wine sales. The researchers in this study utilized standardized wholesale “line pricing” from a major wholesale distributor in the Southwest to compare pricing to the ratings published by Wine Spectator and to determine whether there were any correlations among other key attributes of the wine. The study produced interesting results, including that the wholesale price and vintage of a wine are significant in the prediction of the wine’s rating.
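The correlation check described above can be sketched with Pearson's r between wholesale price and published rating. The prices and scores below are invented placeholders; the study's actual figures are not reproduced here.

```python
# Sketch of a price-vs-rating correlation: Pearson's r between
# wholesale line price and a published rating. Data are hypothetical.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

wholesale_price = [8.0, 12.5, 20.0, 35.0, 55.0]  # hypothetical $/bottle
spectator_rating = [84, 87, 89, 91, 94]          # hypothetical 100-pt scores

print(round(pearson_r(wholesale_price, spectator_rating), 3))
```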

Relevance:

30.00%

Publisher:

Abstract:

In the discussion "Indirect Cost Factors in Menu Pricing," David V. Pavesic, Associate Professor of Hotel, Restaurant and Travel Administration at Georgia State University, begins: "Rational pricing methodologies have traditionally employed quantitative factors to mark up food and beverage or food and labor because these costs can be isolated and allocated to specific menu items. There are, however, a number of indirect costs that can influence the price charged because they provide added value to the customer or are affected by supply/demand factors." The author discusses these costs and the factors that must be taken into account in pricing decisions. Pavesic takes as given that menu pricing should cover costs, return a profit, reflect value for the customer and, in the long run, attract customers and market the establishment. "Prices that are too high will drive customers away, and prices that are too low will sacrifice profit," he puts it succinctly. Building on this premise, the author notes that although food costs figure heavily in menu pricing, factors such as equipment utilization, popularity/demand, and marketing must also be considered. "... there is no single method that can be used to mark up every item on any given restaurant menu. One must employ a combination of methodologies and theories," Pavesic says. "Therefore, when properly carried out, prices will reflect food cost percentages, individual and/or weighted contribution margins, price points, and desired check averages, as well as factors driven by intuition, competition, and demand." He further argues that value, rather than revenue maximization, should be the primary consideration when designing menu pricing, a philosophy that comes with certain caveats, which he explains.
Speaking generally, Pavesic says, "The market ultimately determines the price one can charge." Refining that point, he adds, "Lower prices do not automatically translate into value and bargain in the minds of the customers. Having the lowest prices in your market may not bring customers or profit." "Too often operators engage in price wars through discount promotions and find that profits fall and their image in the marketplace is lowered," he warns. Among the intangibles that influence menu pricing, service tops the list; ambience, location, amenities, product (i.e. food) presentation, and price elasticity are discussed as well, along with the concept of price-value perception. Pavesic closes with a brief overview of à la carte pricing and its pros and cons.
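Two of the quantitative mark-up methods the discussion refers to can be sketched with hypothetical numbers: the food-cost-percentage method and a contribution-margin method. Neither yields "the" correct price on its own; as Pavesic notes, operators blend several approaches.

```python
# Two classic quantitative menu mark-up methods, with made-up figures.
# These illustrate the mechanics only; real pricing also weighs the
# indirect factors (demand, marketing, equipment use) discussed above.

def price_by_food_cost_pct(food_cost, target_pct):
    """Menu price such that food cost is target_pct of the price."""
    return food_cost / target_pct

def price_by_contribution_margin(food_cost, desired_margin):
    """Menu price = item food cost + desired gross contribution."""
    return food_cost + desired_margin

item_food_cost = 4.50  # hypothetical plate cost ($)
print(price_by_food_cost_pct(item_food_cost, 0.30))        # 30% food-cost target
print(price_by_contribution_margin(item_food_cost, 9.00))  # $9 contribution target
```

The two methods diverge precisely because a percentage mark-up penalizes high-cost items while a flat contribution margin does not, which is one reason a single rule cannot price a whole menu.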

Relevance:

30.00%

Publisher:

Abstract:

Pension funds have been part of the private sector since the 1850s. Defined benefit (DB) pension plans, in which a company promises to make regular contributions to investment accounts held for participating employees in order to pay a promised lifelong annuity, are significant capital-markets participants, amounting to 2.3 trillion dollars in 2010 (Federal Reserve Board, 2013). In 2006, Statement of Financial Accounting Standards No. 158 (SFAS 158), Employers' Accounting for Defined Benefit Pension and Other Postretirement Plans, shifted information concerning funding status and pension asset/liability composition from disclosure in the footnotes to recognition in the financial statements. I add to the literature by being the first to examine the effect of recent pension reform during the financial crisis of 2008-09. This dissertation comprises three related essays. In my first essay, I investigate whether investors assign different pricing multiples to the various classes of pension assets when valuing firms. The pricing multiples on all classes of assets are significantly different from each other, but only investments in bonds and equities were value-relevant during the recent financial crisis. Consistent with investors viewing pension liabilities as liabilities of the firm, the pricing multiples on pension liabilities are significantly larger than those on non-pension liabilities. The only pension costs significantly associated with firm value are the actual rate of return and interest expense. In my second essay, I investigate the role of accruals in predicting future cash flows, extending the Barth et al. (2001a) model of the accrual process. Using market value of equity as a proxy for cash flows, the results of this study suggest that aggregate accounting amounts mask how the components of earnings affect investors' ability to predict future cash flows. Disaggregating pension earnings components and accruals results in an increase in predictive power.
During the 2008-2009 financial crisis, however, investors placed a greater (and negative) weight on the incremental information contained in the individual components of accruals. The inferences are robust to alternative specifications of accruals. Finally, in my third essay I investigate how investors view under-funded plans. On average, investors view deficits arising from under-funded plans as belonging to the firm, reward firms with fully or over-funded pension plans, and encourage firms with unfunded pension plans to fund them. Investors also encourage conservative pension asset allocations to mitigate firm risk, and smaller firms are perceived as being better able to handle the risk associated with under-funded plans. During the financial crisis of 2008-2009, under-funded status had a weaker negative association with market value. In all three models, there are significant differences between the pre- and post-SFAS 158 periods. These results are robust to various scenarios of the timing of the financial crisis and to an alternative measure of funding.

Relevance:

30.00%

Publisher:

Abstract:

Fixed-step-size (FSS) and Bayesian staircases are widely used methods for estimating sensory thresholds in 2AFC tasks, although a direct comparison of the two types of procedure under identical conditions has not previously been reported. A simulation study and an empirical test were conducted to compare the performance of optimized Bayesian staircases with that of four optimized variants of the FSS staircase differing in their up-down rules. The ultimate goal was to determine whether FSS or Bayesian staircases are the better choice in experimental psychophysics. The comparison considered the properties of the estimates (i.e. bias and standard errors) in relation to their cost (i.e. the number of trials to completion). The simulation study showed that mean estimates of Bayesian and FSS staircases are dependable when sufficient trials are given and that, in both cases, the standard deviation (SD) of the estimates decreases with the number of trials, although the SD of Bayesian estimates is always lower than that of FSS estimates (and thus Bayesian staircases are more efficient). The empirical test did not support these conclusions, as (1) neither procedure rendered estimates converging on some value, (2) standard deviations did not follow the expected pattern of decrease with the number of trials, and (3) both procedures appeared to be equally efficient. Potential factors explaining the discrepancies between simulation and empirical results are discussed and, all things considered, a sensible recommendation is for psychophysicists to run no fewer than 18 and no more than 30 reversals of an FSS staircase implementing the 1-up/3-down rule.
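The recommended 1-up/3-down FSS staircase can be sketched as a short simulation against an artificial 2AFC observer. The logistic observer model, step size, and seed below are illustrative assumptions, not the paper's simulation setup.

```python
# Minimal 1-up/3-down fixed-step-size staircase, run against a
# simulated 2AFC observer (50% guessing floor, logistic rise).
# All parameter values here are illustrative, not from the study.
import math
import random

def observer_correct(level, rng, threshold=1.0, slope=4.0):
    """Simulated 2AFC observer: probability of a correct response."""
    p = 0.5 + 0.5 / (1.0 + math.exp(-slope * (level - threshold)))
    return rng() < p

def staircase_1up3down(start=2.0, step=0.2, max_reversals=20, seed=1):
    rng = random.Random(seed).random
    level, direction, run = start, 0, 0
    reversals = []
    while len(reversals) < max_reversals:
        if observer_correct(level, rng):
            run += 1
            if run == 3:              # 3 correct in a row -> step down
                run = 0
                if direction > 0:     # was moving up -> record reversal
                    reversals.append(level)
                direction = -1
                level -= step
        else:                         # any error -> step up
            run = 0
            if direction < 0:
                reversals.append(level)
            direction = 1
            level += step
    # Discard early reversals, average the rest as the threshold estimate
    tail = reversals[4:]
    return sum(tail) / len(tail)

print(round(staircase_1up3down(), 2))
```

The 1-up/3-down rule converges on the level yielding about 79% correct, which is why the reversal average sits slightly above the 50%-point of the underlying function.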

Relevance:

30.00%

Publisher:

Abstract:

Threshold estimation with sequential procedures is justifiable on the surmise that the index used in the so-called dynamic stopping rule has diagnostic value for identifying when an accurate estimate has been obtained. The performance of five types of Bayesian sequential procedure was compared here to that of an analogous fixed-length procedure. The indices for use in sequential procedures were: (1) the width of the Bayesian probability interval, (2) the posterior standard deviation, (3) the absolute change, (4) the average change, and (5) the number of sign fluctuations. A simulation study was carried out to evaluate which index renders estimates with less bias and smaller standard error at lower cost (i.e. lower average number of trials to completion), in both yes–no and two-alternative forced-choice (2AFC) tasks. We also considered the effect of the form and parameters of the psychometric function and its similarity to the model function assumed in the procedure. Our results show that sequential procedures do not outperform fixed-length procedures in yes–no tasks. However, in 2AFC tasks, sequential procedures not based on sign fluctuations all yield minimally better estimates than fixed-length procedures, although most of the improvement occurs with short runs that render undependable estimates, and the differences vanish when the procedures run for a number of trials (around 70) that ensures dependability. Thus, none of the indices considered here (some of which are in widespread use) has the diagnostic value that would justify its use. In addition, difficulties of implementation make sequential procedures unfit as alternatives to fixed-length procedures.
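One of the listed indices, the posterior standard deviation, can be sketched as a dynamic stopping rule on a grid-based Bayesian threshold estimate in a yes-no task. The generating psychometric function, grid, criterion value, and seed are illustrative assumptions, not the procedures evaluated in the paper.

```python
# Sketch of a dynamic stopping rule: stop the yes-no run once the
# posterior standard deviation of the threshold falls below a criterion.
# All parameter values are illustrative.
import math
import random

GRID = [i * 0.05 for i in range(41)]  # candidate thresholds 0.0 .. 2.0

def p_yes(level, threshold, slope=5.0):
    return 1.0 / (1.0 + math.exp(-slope * (level - threshold)))

def posterior_sd(post):
    mean = sum(t * p for t, p in zip(GRID, post))
    var = sum(((t - mean) ** 2) * p for t, p in zip(GRID, post))
    return math.sqrt(var)

def run_until_sd(criterion=0.08, true_threshold=1.0, seed=3):
    rng = random.Random(seed)
    post = [1.0 / len(GRID)] * len(GRID)  # flat prior over the grid
    trials = 0
    while posterior_sd(post) > criterion:
        trials += 1
        level = sum(t * p for t, p in zip(GRID, post))  # test at posterior mean
        saw_it = rng.random() < p_yes(level, true_threshold)
        like = [p_yes(level, t) if saw_it else 1.0 - p_yes(level, t)
                for t in GRID]
        post = [p * l for p, l in zip(post, like)]
        z = sum(post)
        post = [p / z for p in post]
    return trials, sum(t * p for t, p in zip(GRID, post))

n_trials, estimate = run_until_sd()
print(n_trials, round(estimate, 2))
```

The paper's point is precisely that a small posterior SD does not guarantee an accurate estimate, so a rule like this stops early without the diagnostic value its users assume.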

Relevance:

30.00%

Publisher:

Abstract:

Purpose

The objective of our study was to test a new approach to approximating organ dose by using the effective energy of the combined 80 kV/140 kV beam used in fast-kV-switch dual-energy (DE) computed tomography (CT). The study had two primary aims: first, to validate experimentally the dose equivalency between MOSFET detectors and an ion chamber (as a gold standard) in a fast-kV-switch DE environment, and second, to estimate the effective dose (ED) of DECT scans using MOSFET detectors and an anthropomorphic phantom.

Materials and Methods

A GE Discovery 750 CT scanner was employed using a fast-kV switch abdomen/pelvis protocol alternating between 80 kV and 140 kV. The specific aims of our study were to (1) Characterize the effective energy of the dual energy environment; (2) Estimate the f-factor for soft tissue; (3) Calibrate the MOSFET detectors using a beam with effective energy equal to the combined DE environment; (4) Validate our calibration by using MOSFET detectors and ion chamber to measure dose at the center of a CTDI body phantom; (5) Measure ED for an abdomen/pelvis scan using an anthropomorphic phantom and applying ICRP 103 tissue weighting factors; and (6) Estimate ED using AAPM Dose Length Product (DLP) method. The effective energy of the combined beam was calculated by measuring dose with an ion chamber under varying thicknesses of aluminum to determine half-value layer (HVL).

Results

The effective energy of the combined dual-energy beams was found to be 42.8 keV. After calibration, tissue dose at the center of the CTDI body phantom was measured at 1.71 ± 0.01 cGy using an ion chamber, and at 1.73 ± 0.04 cGy and 1.69 ± 0.09 cGy using two separate MOSFET detectors, a difference of −0.93% and 1.40%, respectively, between ion chamber and MOSFET. ED from the dual-energy scan was calculated as 16.49 ± 0.04 mSv by the MOSFET method and 14.62 mSv by the DLP method.
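The two ED computations being compared can be sketched as follows: a weighted sum of measured organ doses using ICRP 103 tissue weighting factors, versus the AAPM DLP shortcut ED ≈ k · DLP. The organ doses and the DLP value below are illustrative stand-ins, not the study's measurements; a full ICRP 103 calculation would sum over all listed tissues, of which only a few appear here.

```python
# Effective dose two ways, with made-up inputs.
# (1) ICRP 103 method: ED = sum over tissues of w_T * H_T.
#     Only a handful of tissues are included here for brevity.
# (2) DLP method: ED ~= k * DLP, with k = 0.015 mSv/(mGy*cm), the
#     commonly tabulated adult abdomen/pelvis conversion coefficient.

# Hypothetical equivalent doses per organ (mSv), e.g. from MOSFET readings
organ_dose_mSv = {"stomach": 20.0, "colon": 18.0, "liver": 15.0,
                  "bladder": 12.0, "gonads": 6.0}

# ICRP 103 tissue weighting factors for those organs
W_T = {"stomach": 0.12, "colon": 0.12, "liver": 0.04,
       "bladder": 0.04, "gonads": 0.08}

ed_mosfet = sum(organ_dose_mSv[o] * W_T[o] for o in organ_dose_mSv)

dlp_mGy_cm = 975.0            # hypothetical scanner-reported DLP
ed_dlp = 0.015 * dlp_mGy_cm   # mSv

print(round(ed_mosfet, 2), round(ed_dlp, 2))
```

The gap between the two results in the study (16.49 vs 14.62 mSv) is consistent with the DLP method's known tendency to differ from phantom-measured ED, since k is a population-average coefficient.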

Relevance:

30.00%

Publisher:

Abstract:

Understanding consumer behavior is critical for firms' decision making. How consumers make decisions about what they want and buy directly affects the profits of firms. It is therefore important to consider consumer behaviors and incorporate them into the model when studying firms' optimal strategies and competition between firms. In this dissertation, I study rich and interesting consumer behaviors and their impact on firms' strategies in two essays. The first essay considers consumers' shopping costs, which lead to a preference for one-stop shopping. I examine how store visit costs and consumer knowledge about a product affect the strategic store choice of consumers and, in turn, the pricing, customer service, and advertising decisions of competing retailers. My analysis offers insights on how specialty stores can compete with big-box retailers. In the second essay, I focus on a well-established psychological phenomenon, cognitive dissonance. I incorporate the idea of cognitive dissonance into a model of spatial competition and examine its implications for selling strategy. I provide new insights into the profitability of advance selling and spot selling, as well as the pricing of a bundle and its components. Collectively, the two essays in this dissertation introduce novel ways to model consumer behaviors and help to explain their impact on firm profitability and strategy.

Relevance:

30.00%

Publisher:

Abstract:

Marketers have long looked for observables that could explain differences in consumer behavior. Initial attempts centered on demographic factors, such as age, gender, and race. Although such variables are able to provide some useful information for segmentation (Bass, Tigert, and Lonsdale 1968), more recent studies have shown that variables that tap into consumers' social classes and personal values have more predictive accuracy and also provide deeper insights into consumer behavior. I argue that one demographic construct, religion, merits further consideration as a factor that has a profound impact on consumer behavior. In this dissertation, I focus on two types of religious guidance that may influence consumer behaviors: religious teachings (being content with one's belongings) and religious problem-solving styles (reliance on God).

Essay 1 focuses on the well-established endowment effect and introduces a new moderator (religious teachings on contentment) that influences both owners' and buyers' pricing behaviors. Through fifteen experiments, I demonstrate that when people are primed with religion or characterized by stronger religious beliefs, they tend to value their belongings more than people who are not primed with religion or who have weaker religious beliefs. These effects are caused by religious teachings on being content with one's belongings, which lead to the overvaluation of one's own possessions.

Essay 2 focuses on self-control behaviors, specifically healthy eating, and introduces a new moderator (God’s role in the decision-making process) that determines the relationship between religiosity and the healthiness of food choices. My findings demonstrate that consumers who indicate that they defer to God in their decision-making make unhealthier food choices as their religiosity increases. The opposite is true for consumers who rely entirely on themselves. Importantly, this relationship is mediated by the consumer’s consideration of future consequences. This essay provides an explanation to the existing mixed findings on the relationship between religiosity and obesity.

Relevance:

30.00%

Publisher:

Abstract:

Given the importance of gender in consumer research, one might expect feminist perspectives to be at the forefront of critical engagement with consumer behaviour theory. However, in recent years, feminist voices have been barely audible. This paper explores the value of, and insights offered by, feminist theories and feminist activism, and how feminist theory and practice has altered our understanding of gendered consumption. It then argues that postmodern and postfeminist perspectives have diluted feminism's transformative potential, leading to a critical impasse in marketing and consumer research. In conclusion, we suggest that feminist perspectives, notably materialist feminism, might open up fresh new possibilities for critique, and interesting and worthwhile areas for transformative research in consumer behaviour.

Relevance:

30.00%

Publisher:

Abstract:

Ticket distribution channels for live music events have been revolutionised by the increased take-up of internet technologies, and the music supply chain has evolved into a multi-channel value network. The assumption that this creates increased consumer autonomy and improved service quality is explored here through a case study of the ticket pre-sale for the US leg of the Depeche Mode 2005–06 World Tour, which utilises an innovative virtual channel strategy, promoted as a service to loyal fans. A multi-method analysis, adopting Kozinets' (2002) netnography methodology, is employed to map responses of the band's serious fan base on an internet message board (IMB) throughout the tour pre-sale. The analysis focuses on concerns of pricing, ethics, scope of the offer, use of technology, service quality, and perceived brand-performance fit of channel partners. Findings indicate that fans' behaviour is unpredictable in response to channel partners' performance, and that such offers need careful management to avoid alienating loyal consumers.

Relevance:

30.00%

Publisher:

Abstract:

In Germany, the upscaling algorithm is currently the standard approach for evaluating the PV power produced in a region. This method involves spatially interpolating the normalized power of a set of reference PV plants to estimate the power produced by another set of unknown plants. As little information on the performance of this method could be found in the literature, the first goal of this thesis is to analyse the uncertainty associated with it. It was found that the method can lead to large errors when the set of reference plants is small or has different characteristics or weather conditions than the set of unknown plants. Based on these preliminary findings, an alternative method is proposed for calculating the aggregate power production of a set of PV plants. A probabilistic approach was chosen, by which a power production is calculated at each PV plant from the corresponding weather data. The probabilistic approach consists of evaluating the power for each frequently occurring value of the parameters and estimating the most probable value by averaging these power values, weighted by their frequency of occurrence. The most frequent parameter sets (e.g. module azimuth and tilt angle) and their frequencies of occurrence were assessed through a statistical analysis of the parameters of approx. 35,000 PV plants. It was found that the plant parameters depend statistically on the size and location of the PV plants. Accordingly, separate statistical values were assessed for 14 classes of nominal capacity and 95 regions in Germany (two-digit zip-code areas). The performance of the upscaling and probabilistic approaches was compared on the basis of 15-min power measurements from 715 PV plants provided by the German distribution system operator LEW Verteilnetz.
It was found that the error of the probabilistic method is smaller than that of the upscaling method when the number of reference plants is sufficiently large (>100 reference plants in the case study considered in this chapter). When the number of reference plants is limited (<50 reference plants for the considered case study), it was found that the proposed approach provides a noticeable gain in accuracy with respect to the upscaling method.
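The frequency-weighted averaging step of the probabilistic approach can be sketched as follows. The (tilt, azimuth) classes, their frequencies, and the toy power model below are illustrative assumptions, not values from the thesis's statistical analysis of plant parameters.

```python
# Expected power of one PV plant as the frequency-weighted average of
# the power evaluated at each frequently occurring parameter set.
# Parameter frequencies and the power model are illustrative only.
import math

def power_kw(irradiance_w_m2, tilt_deg, azimuth_deg, p_nominal_kw):
    """Toy power model: nominal power scaled by irradiance and a crude
    orientation factor (south-facing, 35-degree tilt is treated as best)."""
    orientation = (math.cos(math.radians(tilt_deg - 35))
                   * math.cos(math.radians(azimuth_deg - 180)))
    return max(0.0, p_nominal_kw * (irradiance_w_m2 / 1000.0) * orientation)

# Hypothetical (tilt, azimuth) classes with their frequency of occurrence
param_freq = [((30, 180), 0.5), ((45, 150), 0.3), ((20, 210), 0.2)]

def expected_power(irradiance_w_m2, p_nominal_kw):
    return sum(f * power_kw(irradiance_w_m2, t, a, p_nominal_kw)
               for (t, a), f in param_freq)

print(round(expected_power(800.0, 10.0), 2))  # kW for an 800 W/m2 sky
```

In the thesis the frequencies come from the statistical analysis per capacity class and two-digit zip-code region, so each plant's expectation uses the distribution matching its size and location.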

Relevance:

30.00%

Publisher:

Abstract:

Aim

Companies around the world are making sizeable investments in CSR initiatives, but ensuring appropriate returns on these investments remains challenging. It is therefore of value to study the communication of corporate CSR efforts. The purpose of this study is to investigate how consumers react to rational versus emotional message strategies in CSR communication. Two categories of consumer reactions were considered: trust and purchase intention.

Methods

Qualitative research with four focus groups was conducted. Participants discussed three texts regarding a CSR project, utilising a rational, an emotional, and a hybrid rational-emotional message strategy respectively. The conversations focused on trust towards the communication and on purchase intention.

Results

Trust: all of the respondents viewed the rational text as more trustworthy than the emotional text, but they reacted most positively to the combined strategy. Rational information was viewed as more reliable by many participants, with emotional cues adding value by better holding their attention. Purchase intention: participants reacted more positively to the rational CSR communication strategy than to an emotional strategy. For approximately half of the respondents, the hybrid strategy targeting both rational and emotional cues was the most successful in terms of purchase intention. Further analysis suggested that this division in respondents' opinions may reflect a gender difference, with men representing the more task-oriented consumers and women the more socially sensitive ones.

Conclusions

The findings support previous research championing the use of rational strategies over emotional strategies in CSR communication. A number of managerial implications are provided that companies can use to communicate their CSR activities more effectively and increase returns on CSR-related investments.

Relevance:

30.00%

Publisher:

Abstract:

The value premium is well established in empirical asset pricing, but to date there is little understanding as to its fundamental drivers. We use a stochastic earnings valuation model to establish a direct link between the volatility of future earnings growth and firm value. We illustrate that risky earnings growth affects growth and value firms differently. We provide empirical evidence that the volatility of future earnings growth is a significant determinant of the value premium. Using data on individual firms and characteristic-sorted test portfolios, we also find that earnings growth volatility is significant in explaining the cross-sectional variation of stock returns. Our findings imply that the value premium is the rational consequence of accounting for risky earnings growth in the firm valuation process.

Relevance:

30.00%

Publisher:

Abstract:

Robotic grippers are widely used in industry, and their deployment could be even greater if they were more intelligent. By giving them tactile capabilities and the intelligence to estimate the pose of a grasped object, a much wider range of tasks could be accomplished by robots. This thesis presents the development of algorithms for estimating the pose of objects grasped by a robotic gripper. Algorithms were developed for three different robotic systems, but under the same constraints: in all three systems, the pose is estimated solely from a grasp of the object, tactile data, and the configuration of the gripper. For each system, the performance achievable by the minimalist setup under study is evaluated. The thesis first lays out general concepts of pose estimation. Next, a planar gripper with two fingers of two phalanges each is modelled in a simulation environment, and an algorithm for estimating the pose of an object grasped by this gripper is described. The algorithm is based on interpretation trees and the RANSAC algorithm. A planar experimental system with one additional phalanx per finger is then modelled and studied in order to develop a suitable pose estimation algorithm. Its principles are similar to those of the first algorithm, but the sensors in this system are less precise, so adaptations and improvements had to be made; among other things, the sensor measurements are exploited more fully. Finally, a spatial experimental system with three fingers of three phalanges each is studied. Following the modelling, the algorithm developed for this complex system is presented: partially random hypotheses are generated, completed, and then evaluated. The evaluation step notably relies on the Levenberg-Marquardt algorithm.
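The core geometric step behind such algorithms can be sketched for the planar case: once tactile contact points have been matched to points on the object model (the job of the interpretation-tree/RANSAC hypothesis stage), the planar pose that best aligns them has a closed-form least-squares solution. The contact data below are invented for illustration and the closed-form fit stands in for the thesis's actual estimation pipeline.

```python
# Least-squares planar pose (rotation + translation) aligning matched
# model points to tactile contact points, in closed form (2D Procrustes).
# The example correspondences are synthetic and noise-free.
import math

def fit_planar_pose(model_pts, contact_pts):
    """Rigid 2D transform (theta, tx, ty) mapping model_pts onto contact_pts."""
    n = len(model_pts)
    mx = sum(p[0] for p in model_pts) / n
    my = sum(p[1] for p in model_pts) / n
    cx = sum(p[0] for p in contact_pts) / n
    cy = sum(p[1] for p in contact_pts) / n
    # Accumulate dot and cross products of the centred point pairs
    s_cos = s_sin = 0.0
    for (ax, ay), (bx, by) in zip(model_pts, contact_pts):
        ax, ay, bx, by = ax - mx, ay - my, bx - cx, by - cy
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    theta = math.atan2(s_sin, s_cos)
    tx = cx - (mx * math.cos(theta) - my * math.sin(theta))
    ty = cy - (mx * math.sin(theta) + my * math.cos(theta))
    return theta, tx, ty

# Synthetic check: rotate model points by 30 degrees and shift them;
# the fit should recover exactly that pose.
model = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
th = math.radians(30)
contacts = [(x * math.cos(th) - y * math.sin(th) + 0.5,
             x * math.sin(th) + y * math.cos(th) - 0.2) for x, y in model]

theta, tx, ty = fit_planar_pose(model, contacts)
print(round(math.degrees(theta), 1), round(tx, 2), round(ty, 2))
```

With noisy tactile data and unknown correspondences, this fit becomes the inner step evaluated for each RANSAC hypothesis, and in the spatial case an iterative solver such as Levenberg-Marquardt replaces the closed form.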