918 results for 150507 Pricing (incl. Consumer Value Estimation)


Relevance:

30.00%

Publisher:

Abstract:

Prior research shows that electronic word of mouth (eWOM) wields considerable influence over consumer behavior. However, as the volume and variety of eWOM grows, firms are faced with challenges in analyzing and responding to this information. In this dissertation, I argue that to meet the new challenges and opportunities posed by the expansion of eWOM and to more accurately measure its impacts on firms and consumers, we need to revisit our methodologies for extracting insights from eWOM. This dissertation consists of three essays that further our understanding of the value of social media analytics, especially with respect to eWOM. In the first essay, I use machine learning techniques to extract semantic structure from online reviews. These semantic dimensions describe the experiences of consumers in the service industry more accurately than traditional numerical variables. To demonstrate the value of these dimensions, I show that they can be used to substantially improve the accuracy of econometric models of firm survival. In the second essay, I explore the effects on eWOM of online deals, such as those offered by Groupon, the value of which to both consumers and merchants is controversial. Through a combination of Bayesian econometric models and controlled lab experiments, I examine the conditions under which online deals affect online reviews and provide strategies to mitigate the potential negative eWOM effects resulting from online deals. In the third essay, I focus on how eWOM can be incorporated into efforts to reduce foodborne illness, a major public health concern. I demonstrate how machine learning techniques can be used to monitor hygiene in restaurants through crowd-sourced online reviews. I am able to identify instances of moral hazard within the hygiene inspection scheme used in New York City by leveraging a dictionary specifically crafted for this purpose. To the extent that online reviews provide some visibility into the hygiene practices of restaurants, I show how losses from information asymmetry may be partially mitigated in this context. Taken together, this dissertation contributes by revisiting and refining the use of eWOM in the service sector through a combination of machine learning and econometric methodologies.
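As a rough illustration of the dictionary-based monitoring idea in the third essay, the sketch below scores reviews against a small hygiene term list; the terms, data and scoring rule are placeholders, not the dissertation's actual lexicon or model.

# Minimal sketch: scoring restaurant reviews against a hand-crafted hygiene
# dictionary. The term list and the scoring rule are illustrative assumptions.
import re
from collections import defaultdict

HYGIENE_TERMS = {"dirty", "filthy", "roach", "rodent", "sick", "food poisoning", "unsanitary"}

def hygiene_score(review_text: str) -> int:
    """Count occurrences of dictionary terms in a single review."""
    text = review_text.lower()
    return sum(len(re.findall(r"\b" + re.escape(term) + r"\b", text)) for term in HYGIENE_TERMS)

def rank_restaurants(reviews):
    """reviews: iterable of (restaurant_id, review_text) pairs."""
    totals = defaultdict(int)
    for rid, text in reviews:
        totals[rid] += hygiene_score(text)
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(rank_restaurants([("r1", "The place was dirty and I got sick."),
                        ("r2", "Great pasta, friendly staff.")]))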

Relevance:

30.00%

Publisher:

Abstract:

We develop the energy norm a-posteriori error estimation for hp-version discontinuous Galerkin (DG) discretizations of elliptic boundary-value problems on 1-irregularly, isotropically refined affine hexahedral meshes in three dimensions. We derive a reliable and efficient indicator for the errors measured in terms of the natural energy norm. The ratio of the efficiency and reliability constants is independent of the local mesh sizes and depends only weakly on the polynomial degrees. In our analysis we make use of an hp-version averaging operator in three dimensions, which we explicitly construct and analyze. We use our error indicator in an hp-adaptive refinement algorithm and illustrate its practical performance in a series of numerical examples. Our numerical results indicate that exponential rates of convergence are achieved for problems with smooth solutions, as well as for problems with isotropic corner singularities.
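For readers less familiar with a-posteriori terminology, reliability and efficiency of an indicator \eta are usually stated as the following pair of bounds; the constants shown are generic placeholders, and their behaviour in the mesh size and polynomial degree is precisely what the abstract refers to.

\[
  \| u - u_{hp} \|_{E} \;\le\; C_{\mathrm{rel}}\,\eta \quad\text{(reliability)},
  \qquad
  \eta \;\le\; C_{\mathrm{eff}}\,\| u - u_{hp} \|_{E} \quad\text{(efficiency)},
\]

so the indicator brackets the true energy-norm error up to these constants.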

Relevance:

30.00%

Publisher:

Abstract:

Even though much attention has been paid to online consumer behavior, academic studies are deficient in comprehending offline consumer behavior. This study offers a survey of reflections concerning Portuguese offline consumer behavior by observing how Portuguese adult consumers engage, embrace and act throughout the offline world, i.e., the offline media channels and the customer decision-making process at a store, with regard to digital nativity, education and gender. Drawing on an online questionnaire and using a convenience sample of 471 respondents, data were analyzed using descriptive analysis and independent-sample t-tests. The results indicate that Portuguese consumers prefer calling or going to a store when they have an operational problem, that they value credit card security at a store, and that Portuguese females highly value touching and feeling the product at a store. Finally, implications for academics and marketers are discussed.

Relevance:

30.00%

Publisher:

Abstract:

Investors value the special attributes of monetary assets (e.g., exchangeability, liquidity, and safety) and pay a premium for holding them in the form of a lower return rate. The user cost of holding monetary assets can be measured approximately by the difference between the returns on illiquid risky assets and those on safer liquid assets. A more appropriate measure should adjust this difference by the differential risk of the assets in question. We investigate the impact that time non-separable preferences have on the estimation of the risk-adjusted user cost of money. Using U.K. data from 1965Q1 to 2011Q1, we estimate a habit-based asset pricing model with money in the utility function and find that the risk adjustment for risky monetary assets is negligible. Thus, researchers can dispense with risk adjusting the user cost of money in constructing monetary aggregate indexes.
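For reference, a common Barnett-type formulation of the (unadjusted) user cost described above is shown below, where R_t is the return on an illiquid benchmark asset and r_{it} the own return on liquid monetary asset i; the risk-adjusted variant corrects r_{it} for the asset's riskiness (in the paper, within a habit-based asset pricing model), a correction the authors find to be negligible.

\[
  u_{it} \;=\; \frac{R_t - r_{it}}{1 + R_t}
\]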

Relevance:

30.00%

Publisher:

Abstract:

Statistical approaches to study extreme events require, by definition, long time series of data. In many scientific disciplines, these series are often subject to variations at different temporal scales that affect the frequency and intensity of their extremes. Therefore, the assumption of stationarity is violated and alternative methods to conventional stationary extreme value analysis (EVA) must be adopted. Using the example of environmental variables subject to climate change, in this study we introduce the transformed-stationary (TS) methodology for non-stationary EVA. This approach consists of (i) transforming a non-stationary time series into a stationary one, to which the stationary EVA theory can be applied, and (ii) reverse transforming the result into a non-stationary extreme value distribution. As a transformation, we propose and discuss a simple time-varying normalization of the signal and show that it enables a comprehensive formulation of non-stationary generalized extreme value (GEV) and generalized Pareto distribution (GPD) models with a constant shape parameter. A validation of the methodology is carried out on time series of significant wave height, residual water level, and river discharge, which show varying degrees of long-term and seasonal variability. The results from the proposed approach are comparable with the results from (a) a stationary EVA on quasi-stationary slices of non-stationary series and (b) the established method for non-stationary EVA. However, the proposed technique comes with advantages in both cases. For example, in contrast to (a), the proposed technique uses the whole time horizon of the series for the estimation of the extremes, allowing for a more accurate estimation of large return levels. Furthermore, with respect to (b), it decouples the detection of non-stationary patterns from the fitting of the extreme value distribution. As a result, the steps of the analysis are simplified and intermediate diagnostics are possible. In particular, the transformation can be carried out by means of simple statistical techniques such as low-pass filters based on the running mean and the standard deviation, and the fitting procedure is a stationary one with a few degrees of freedom and is easy to implement and control. An open-source MATLAB toolbox has been developed to cover this methodology, which is available at https://github.com/menta78/tsEva/ (Mentaschi et al., 2016).
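A minimal sketch of the TS idea follows, assuming synthetic data and a simple running-mean/running-standard-deviation transformation; the actual tsEva toolbox implements this more carefully, including seasonal variants.

# Sketch of the transformed-stationary (TS) idea under simplifying assumptions:
# normalize the series with a running mean and running standard deviation,
# fit a stationary GEV to annual maxima of the normalized series, then map the
# fitted quantiles back to the original (non-stationary) scale.
import numpy as np
import pandas as pd
from scipy.stats import genextreme

rng = np.random.default_rng(0)
idx = pd.date_range("1980-01-01", "2019-12-31", freq="D")
trend = np.linspace(0.0, 1.0, len(idx))                      # synthetic long-term change
x = pd.Series(trend + rng.gumbel(loc=1.0, scale=0.3, size=len(idx)), index=idx)

win = 365 * 5                                                 # 5-year running window
mu = x.rolling(win, center=True, min_periods=win // 2).mean()
sigma = x.rolling(win, center=True, min_periods=win // 2).std()
y = (x - mu) / sigma                                          # transformed, ~stationary series

ann_max = y.groupby(y.index.year).max().dropna()
c, loc, scale = genextreme.fit(ann_max.values)                # stationary GEV fit

# 100-year return level in normalized units, mapped back at a chosen time t:
rl_norm = genextreme.ppf(1 - 1.0 / 100, c, loc, scale)
t = idx[-1000]
rl_t = mu.loc[t] + sigma.loc[t] * rl_norm
print(f"100-year return level at {t.date()}: {rl_t:.2f}")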

Relevance:

30.00%

Publisher:

Abstract:

One of the objectives of this study is to perform a classification of socio-demographic components at the level of city sections in the City of Lisbon. In order to provide a suitable platform for the restaurant potentiality map, the socio-demographic components were selected to produce a map of spatial clusters in accordance with restaurant suitability. Consequently, the second objective is to obtain a potentiality map in terms of underestimation and overestimation of the number of restaurants. To the best of our knowledge, no comparable methodology for the estimation of restaurant potentiality has been reported. The results were achieved with a combination of a SOM (Self-Organizing Map), which provides a segmentation map, and a GAM (Generalized Additive Model) with a spatial component for restaurant potentiality. The final results indicate that the highest influence on restaurant potentiality comes from tourist sites, spatial autocorrelation in terms of neighboring restaurants (the spatial component), and tax value, while lower importance is given to households with 1 or 2 members and to the employed population, respectively. In addition, an important conclusion is that the most attractive market sites show no change or moderate underestimation in terms of restaurant potentiality.
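A minimal sketch of the two-step pipeline is given below, assuming the third-party MiniSom and pyGAM packages and placeholder features; the variables, grid size and smooth terms are illustrative, not the ones used in the study.

# Sketch of the two-step approach under simplifying assumptions:
# (1) a Self-Organizing Map segments city sections by socio-demographic profile,
# (2) a GAM with a spatial smooth estimates restaurant potentiality.
import numpy as np
from minisom import MiniSom
from pygam import LinearGAM, s, te

rng = np.random.default_rng(1)
n = 500
# Placeholder features per city section: household-size share, employed population,
# tax value, tourist-site density, plus x/y coordinates.
X_socio = rng.random((n, 4))
coords = rng.random((n, 2))
n_restaurants = rng.poisson(5 + 20 * X_socio[:, 3], size=n)    # synthetic target

# Step 1: SOM segmentation of socio-demographic space (one grid cell per section).
som = MiniSom(4, 4, X_socio.shape[1], sigma=1.0, learning_rate=0.5, random_seed=1)
som.train_random(X_socio, 1000)
segments = np.array([som.winner(v) for v in X_socio])          # (row, col) per section

# Step 2: GAM with smooth terms for features and a tensor smooth for space.
X = np.hstack([X_socio, coords])
gam = LinearGAM(s(0) + s(1) + s(2) + s(3) + te(4, 5)).fit(X, n_restaurants)
expected = gam.predict(X)
residual = n_restaurants - expected     # >0: underestimation, <0: overestimation
print(residual[:5])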

Relevance:

30.00%

Publisher:

Abstract:

Compressed covariance sensing using quadratic samplers is gaining increasing interest in the recent literature. The covariance matrix often plays the role of a sufficient statistic in many signal and information processing tasks. However, owing to the large dimension of the data, it may become necessary to obtain a compressed sketch of the high dimensional covariance matrix to reduce the associated storage and communication costs. Nested sampling has been proposed in the past as an efficient sub-Nyquist sampling strategy that enables perfect reconstruction of the autocorrelation sequence of Wide-Sense Stationary (WSS) signals, as though it were sampled at the Nyquist rate. The key idea behind nested sampling is to exploit properties of the difference set that naturally arises in the quadratic measurement model associated with covariance compression. In this thesis, we will focus on developing novel versions of nested sampling for low rank Toeplitz covariance estimation and phase retrieval, where the latter problem finds many applications in high resolution optical imaging, X-ray crystallography and molecular imaging. The problem of low rank compressive Toeplitz covariance estimation is first shown to be fundamentally related to that of line spectrum recovery. In the absence of noise, this connection can be exploited to develop a particular kind of sampler called the Generalized Nested Sampler (GNS), which can achieve optimal compression rates. In the presence of bounded noise, we develop a regularization-free algorithm that provably leads to stable recovery of the high dimensional Toeplitz matrix from its order-wise minimal sketch acquired using a GNS. Contrary to existing TV-norm and nuclear-norm based reconstruction algorithms, our technique does not use any tuning parameters, which can be of great practical value. The idea of nested sampling also finds a surprising use in the problem of phase retrieval, which has been of great interest in recent times for its convex formulation via PhaseLift. By using another modified version of nested sampling, namely the Partial Nested Fourier Sampler (PNFS), we show that, with probability one, it is possible to achieve a certain conjectured lower bound on the necessary measurement size. Moreover, for sparse data, an l1 minimization based algorithm is proposed that can lead to stable phase retrieval using an order-wise minimal number of measurements.
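To make the difference-set idea concrete, the sketch below builds a standard two-level nested array and checks that its pairwise differences cover every lag up to N2*(N1+1)-1; this illustrates classical nested sampling rather than the GNS or PNFS variants developed in the thesis.

# Two-level nested array: N1 + N2 physical elements whose pairwise differences
# cover all consecutive lags, so the autocorrelation of a WSS signal can be
# estimated at all those lags from compressed (sub-Nyquist) samples.
import numpy as np

N1, N2 = 4, 4
level1 = np.arange(1, N1 + 1)                       # {1, ..., N1}
level2 = (N1 + 1) * np.arange(1, N2 + 1)            # {(N1+1), 2(N1+1), ..., N2(N1+1)}
positions = np.concatenate([level1, level2])

diffs = np.unique(np.abs(positions[:, None] - positions[None, :]))
max_lag = N2 * (N1 + 1) - 1
print("physical elements:", positions.size)
print("consecutive lags covered:", np.array_equal(diffs, np.arange(max_lag + 1)))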

Relevance:

30.00%

Publisher:

Abstract:

Wireless sensor networks are often deployed in large numbers, over a large geographical region, in order to monitor the phenomena of interest. Sensors used in the sensor networks often suffer from random or systematic errors such as drift and bias. Even if they are calibrated at the time of deployment, they tend to drift as time progresses. Consequently, the progressive manual calibration of such a large-scale sensor network becomes impossible in practice. In this article, we address this challenge by proposing a collaborative framework to automatically detect and correct the drift in order to keep the data collected from these networks reliable. We propose a novel scheme that uses geospatial estimation-based interpolation techniques on measurements from neighboring sensors to collaboratively predict the value of the phenomenon being observed. The predicted values are then used iteratively to correct the sensor drift by means of a Kalman filter. Our scheme can be implemented in a centralized as well as a distributed manner to detect and correct the drift generated in the sensors. For the centralized implementation of our scheme, we compare several kriging- and non-kriging-based geospatial estimation techniques in combination with the Kalman filter, and show the superiority of the kriging-based methods in detecting and correcting the drift. To demonstrate the applicability of our distributed approach in a real-world application scenario, we implement our algorithm on a network consisting of Wireless Sensor Network (WSN) hardware. We further evaluate single as well as multiple drifting sensor scenarios to show the effectiveness of our algorithm in detecting and correcting drift. Further, we address the issue of high power usage for data transmission among neighboring nodes, which leads to low network lifetime for the distributed approach, by proposing two power-saving schemes. Moreover, we compare our algorithm with a blind calibration scheme in the literature and demonstrate its superiority in detecting both linear and nonlinear drifts.
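A minimal sketch of the collaborative correction loop follows, using inverse-distance weighting as a stand-in for the kriging interpolators compared in the article and a scalar random-walk Kalman filter for the drift state; node positions, noise levels and filter parameters are illustrative assumptions.

import numpy as np

def idw_predict(neighbour_pos, neighbour_vals, target_pos, power=2.0):
    """Predict the value at a node from its neighbours by inverse-distance weighting."""
    d = np.linalg.norm(neighbour_pos - target_pos, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return float(np.sum(w * neighbour_vals) / np.sum(w))

def kalman_drift(readings, predictions, q=1e-4, r=0.05):
    """Track slowly varying additive drift with a random-walk state model."""
    d_est, p = 0.0, 1.0
    corrected = []
    for z, pred in zip(readings, predictions):
        p += q                                   # predict step (random-walk drift)
        innov = (z - pred) - d_est               # measured drift minus current estimate
        k = p / (p + r)                          # Kalman gain
        d_est += k * innov
        p *= (1.0 - k)
        corrected.append(z - d_est)              # drift-corrected reading
    return np.array(corrected), d_est

# Toy example: one node drifts linearly while its neighbours stay calibrated.
rng = np.random.default_rng(2)
true_field = 20.0 + 0.01 * np.arange(200)
neighbour_pos = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
target_pos = np.array([0.0, 0.0])
drift = 0.02 * np.arange(200)
readings = true_field + drift + rng.normal(0, 0.05, 200)
preds = [idw_predict(neighbour_pos, true_field[k] + rng.normal(0, 0.05, 3), target_pos)
         for k in range(200)]
corrected, final_drift = kalman_drift(readings, preds)
print(f"estimated drift after 200 steps: {final_drift:.2f} (true: {drift[-1]:.2f})")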

Relevance:

30.00%

Publisher:

Abstract:

Pitch Estimation, also known as Fundamental Frequency (F0) estimation, has been a popular research topic for many years, and is still investigated nowadays. The goal of Pitch Estimation is to find the pitch or fundamental frequency of a digital recording of speech or musical notes. It plays an important role because it is the key to identifying which notes are being played and at what time. Pitch Estimation of real instruments is a very hard task to address. Each instrument has its own physical characteristics, which are reflected in different spectral characteristics. Furthermore, the recording conditions can vary from studio to studio and background noises must be considered. This dissertation presents a novel approach to the problem of Pitch Estimation, using Cartesian Genetic Programming (CGP). We take advantage of evolutionary algorithms, in particular CGP, to explore and evolve complex mathematical functions that act as classifiers. These classifiers are used to identify piano note pitches in an audio signal. To help us with the codification of the problem, we built a highly flexible CGP Toolbox, generic enough to encode different kinds of programs. The encoded evolutionary algorithm is the one known as 1 + λ, and we can choose the value for λ. The toolbox is very simple to use. Settings such as the mutation probability, number of runs and generations are configurable. The cartesian representation of CGP can take multiple forms and it is able to encode function parameters. It is prepared to handle different types of fitness functions: minimization and maximization of f(x), and it has a useful system of callbacks. We trained 61 classifiers corresponding to 61 piano notes. A training set of audio signals was used for each of the classifiers: half were signals with the same pitch as the classifier (true positive signals) and the other half were signals with different pitches (true negative signals). The F-measure was used for the fitness function. Signals with the same pitch as the classifier that were correctly identified by the classifier count as true positives. Signals with the same pitch as the classifier that were not correctly identified by the classifier count as false negatives. Signals with a different pitch from the classifier that were not identified by the classifier count as true negatives. Signals with a different pitch from the classifier that were identified by the classifier count as false positives. Our first approach was to evolve classifiers for identifying artificial signals created by mathematical functions: sine, sawtooth and square waves. Our function set is basically composed of filtering operations on vectors and of arithmetic operations with constants and vectors. All the classifiers correctly identified true positive signals and did not identify true negative signals. We then moved to real audio recordings. For testing the classifiers, we picked audio signals different from the ones used during the training phase. For a first approach, the obtained results were very promising, but could be improved. We made slight changes to our approach and the number of false positives was reduced by 33% compared to the first approach. We then applied the evolved classifiers to polyphonic audio signals, and the results indicate that our approach is a good starting point for addressing the problem of Pitch Estimation.
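The fitness computation described above reduces to the standard F-measure; a minimal sketch with made-up classifier outputs is shown below.

# Each evolved classifier is scored by the F-measure over a training set of
# positive (same-pitch) and negative (different-pitch) signals.
def f_measure(predictions, labels):
    """predictions, labels: sequences of booleans (classifier fired / same pitch)."""
    tp = sum(p and l for p, l in zip(predictions, labels))
    fp = sum(p and not l for p, l in zip(predictions, labels))
    fn = sum((not p) and l for p, l in zip(predictions, labels))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: 4 same-pitch signals and 4 different-pitch signals for one note classifier.
labels      = [True, True, True, True, False, False, False, False]
predictions = [True, True, True, False, False, True, False, False]
print(f"fitness (F-measure) = {f_measure(predictions, labels):.3f}")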

Relevance:

30.00%

Publisher:

Abstract:

Company valuation models attempt to estimate the value of a company in two stages: (1) a period of explicit analysis, and (2) an unlimited period of cash flows obtained through a mathematical approach of perpetuity, which is the terminal value. In general, these models, whether they belong to the Dividend Discount Model (DDM), Discounted Cash Flow (DCF), or Residual Income Model (RIM) group, discount one attribute (dividends, free cash flow, or results) at a given discount rate. This discount rate, obtained in most cases through the CAPM (Capital Asset Pricing Model) or APT (Arbitrage Pricing Theory), allows the analysis to include the cost of invested capital based on the riskiness of the attributes. However, one cannot ignore that the second stage of the valuation usually accounts for 53-80% of the company value (Berkman et al., 1998) and is loaded with uncertainties. In this context, particular attention is needed when estimating the value of this portion of the company, otherwise the assessment may produce a high level of error. Mindful of this concern, this study sought to collect the perceptions of European and North American financial analysts on the key features of a company that they believe contribute most to its value. To this end, we used a survey with closed-ended questions. From the analysis of 123 valid responses using factor analysis, the authors conclude that great importance is attached (1) to the life expectancy of the company, (2) to liquidity and operating performance, (3) to innovation and the ability to allocate resources to R&D, and (4) to management capacity and capital structure, in determining the value of a company or business in the long term. These results support our belief that we can formulate a model for valuing companies and businesses in which the results obtained in the evaluations are as close as possible to those found in the stock market.
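For context, the two-stage structure referred to above is commonly written as below; the Gordon-growth form of the terminal value is shown only as a typical illustration, not as the specific model assessed in the study.

\[
  V_0 \;=\; \sum_{t=1}^{n} \frac{CF_t}{(1+r)^t} \;+\; \frac{TV_n}{(1+r)^n},
  \qquad
  TV_n \;=\; \frac{CF_{n+1}}{r - g},
\]

where CF_t is the discounted attribute (dividends, free cash flow, or results), r the discount rate (e.g., from the CAPM or APT), and g a perpetual growth rate.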

Relevance:

30.00%

Publisher:

Abstract:

The present work, in which additional value-creating processes in existing combined heat and power (CHP) structures have been examined, is motivated by a politically and consumer-driven push towards a bioeconomy and a stagnation of the existing business models in large parts of the CHP sector. The research is based on cases where the integration of flash pyrolysis for co-production of bio-oil, co-gasification for production of fuel gas and synthetic biofuels, as well as leaching of extractable fuel components, in existing CHP plants has been simulated. In particular, this work has focused on CHP plants that utilize boilers of the fluidized bed (FB) type, where the concept of coupling a separate FB reactor to the FB of the boiler forms an important basis for the analyses. In such dual fluidized bed (DFB) technology, heat is transferred from the boiler to the new reactor, which is operated with fluidization media other than air, thereby enabling thermochemical processes other than combustion to take place. The results of this work show that broader operations at existing CHP plants have the potential to enable production of significant volumes of chemicals and/or fuels with high efficiency, while maintaining heat supply to external customers. Based on the insight that the technical preconditions for a broader operation are favourable, the motivation and ability among the incumbents in the Swedish CHP sector to participate in a transition of their operations towards a biorefinery were examined. The results of this assessment showed that the incumbents believe that a broader operation can create significant value for their own operations, society and the environment, but that they lack both a strong motivation and important abilities to move into the new technological fields. If the concepts of broader production are widely implemented in the Swedish FB-based CHP sector, this can substantially contribute to the transition towards a bioeconomy.

Relevance:

30.00%

Publisher:

Abstract:

Strong convective events can produce extreme precipitation, hail, lightning or gusts, potentially inducing severe socio-economic impacts. These events have a relatively small spatial extension and, in most cases, a short lifetime. In this study, a model is developed for estimating convective extreme events based on large-scale conditions. It is shown that strong convective events can be characterized by a Weibull distribution of radar-based rainfall with a low shape and high scale parameter value. A radius of 90 km around a station reporting a convective situation turned out to be suitable. A methodology is developed to estimate the Weibull parameters and thus the occurrence probability of convective events from large-scale atmospheric instability and enhanced near-surface humidity, which are usually found on a larger scale than the convective event itself. Here, the probability of the occurrence of extreme convective events is estimated from the KO-index, indicating the stability, and relative humidity at 1000 hPa. Both variables are computed from the ERA-Interim reanalysis. In a first version of the methodology, these two variables are applied to estimate the spatial rainfall distribution and the occurrence of a convective event. The developed method shows significant skill in estimating the occurrence of convective events as observed at synoptic stations, in lightning measurements, and in severe weather reports. In order to take frontal influences into account, a scheme for the detection of atmospheric fronts is implemented. While generally higher instability is found in the vicinity of fronts, the skill of this approach is largely unchanged. Additional improvements were achieved by a bias correction and the use of ERA-Interim precipitation. The resulting estimation method is applied to the ERA-Interim period (1979-2014) to establish a ranking of estimated convective extreme events. Two strong estimated events that reveal a frontal influence are analysed in detail. As a second application, the method is applied to GCM-based decadal predictions in the period 1979-2014, which were initialized every year. It is shown that decadal predictive skill for convective event frequencies over Germany is found for the first 3-4 years after initialization.
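A minimal sketch of the Weibull characterization step follows, using synthetic rainfall intensities in place of radar data; the sample, threshold and parameter values are placeholders.

# Fit a two-parameter Weibull distribution to rainfall intensities within a chosen
# radius of a station and inspect the shape and scale parameters (the abstract
# associates strong convective situations with a low shape / high scale value).
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(3)
rainfall = rng.weibull(0.7, size=2000) * 8.0           # "convective-like": low shape, high scale
rainfall = rainfall[rainfall > 0.1]                    # keep wet values only

shape, loc, scale = weibull_min.fit(rainfall, floc=0)  # fix the location at zero
print(f"Weibull shape = {shape:.2f}, scale = {scale:.2f}")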

Relevance:

30.00%

Publisher:

Abstract:

Purpose: The main purpose of the study is to promote consideration of the issues and approaches available for costing sustainable buildings with a view to minimising cost overruns occasioned by conservative whole-life cost estimates. The paper primarily looks at the impact of adopting continuity in whole-life cost models for zero-carbon houses. Design/methodology/approach: The study embraces a mathematically based risk procedure based on the binomial theorem for analysing the cost implications of the Lighthouse zero-carbon house project. A practical application of the continuous whole-life cost model is developed and the results are compared with existing whole-life cost techniques using finite element methods and Monte Carlo analysis. Findings: With standard whole-life costing, discounted present-value analysis tends to underestimate the cost of a project. Adopting continuity in whole-life cost models presents a clearer picture and profile of the economic realities and decision choices confronting clients and policy-makers. It also expands the informative scope on the costs of zero-carbon housing projects. Research limitations/implications: A primary limitation of this work is its focus on just one property type as the unit of analysis. This research is also limited in its consideration of initial and running cost categories only. The capital cost figures for the Lighthouse are indicative rather than definitive. Practical implications: The continuous whole-life cost technique is a novel and innovative approach in financial appraisal [...] Benefits of an improved costing framework will be far-reaching in establishing effective policies aimed at client acceptance and optimally performing supply chain networks. Originality/value: Continuous whole-life costing pioneers an experimental departure from the stereotypical discounting mechanism in standard whole-life costing procedures.
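As a generic illustration of the contrast the paper draws, a discrete whole-life cost discounts period costs while a continuous formulation discounts a cost flow over the life cycle; the expressions below are textbook forms, not the binomial-theorem-based procedure developed for the Lighthouse project.

\[
  WLC_{\text{discrete}} \;=\; C_0 + \sum_{t=1}^{T} \frac{C_t}{(1+r)^t},
  \qquad
  WLC_{\text{continuous}} \;=\; C_0 + \int_{0}^{T} C(t)\, e^{-\rho t}\, dt,
  \quad \rho = \ln(1+r),
\]

where C_t (or the flow C(t)) collects initial and running costs over the horizon T and r is the discount rate.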

Relevance:

30.00%

Publisher:

Abstract:

Starting from the evolutionary dynamics of the economics of Information and Communication Technologies and the establishment of minimum speed standards in different regulatory contexts worldwide, and in Colombia in particular, this article presents several empirical approaches to evaluate the actual effects of establishing broadband service definitions in the fixed Internet market. Based on the data available for Colombia on the fixed Internet service plans offered during the period 2006-2012, a modified logistic diffusion process and a strategic interaction model are estimated for the residential and corporate segments, in order to identify the impacts generated on the take-up of the service at the municipal level and on the strategic decisions adopted by operators, respectively. Regarding the results, it is found, on the one hand, that the two regulatory measures established in Colombia in 2008 and 2010 have significant and positive effects on the shift and growth of the diffusion processes at the municipal level. On the other hand, strategic substitutability is observed in the download-speed supply decisions of corporate operators, while an analysis of the distance between the offered speed and the minimum broadband standard shows that residential service providers tend to cluster their speed decisions around the levels established by regulation.
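For orientation, a logistic diffusion process of the kind estimated here is typically parameterized as below; letting the ceiling and timing depend on covariates (such as the 2008 and 2010 regulatory measures) is one illustrative way to write a "modified" version, not necessarily the exact specification used in the article.

\[
  S_{it} \;=\; \frac{K_i(\mathbf{x}_{it})}{1 + \exp\!\bigl[-r_i\,\bigl(t - \tau_i(\mathbf{x}_{it})\bigr)\bigr]},
\]

where S_it is the number of fixed Internet subscriptions in municipality i at time t, K_i the saturation (ceiling) level, r_i the growth rate, τ_i the inflection time, and x_it covariates including the regulatory measures.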

Relevance:

30.00%

Publisher:

Abstract:

This project originates in the interest of analyzing current strategies for the promotion of pharmaceutical products, within the framework of the debate on the persuasive or informative effect that direct advertising has on consumers. The objective is to determine the effect of Direct-to-Consumer Advertising (DTCA) strategies on the purchasing behavior of patients and on the prescriptions written by physicians in the prescription-drug market in the United States. To this end, a monograph was proposed that includes an argumentative literature review, consulting secondary-level information in scientific databases whose contents met methodological criteria determined by the argumentative nature of the study. Additionally, the debate on these advertisements was analyzed in the light of two studies of patients with breast, prostate and colon cancer, led by the Pennsylvania Cancer Registry, involving the biopharmaceutical products Avodart® and Flomax®. Finally, the research was grounded in the relationship of the pharmaceutical market in the United States with each of the agents that interact in it (consumers, prescribing physicians and pharmaceutical companies), as well as the value they share through these interactions. It is concluded that consumers' purchasing behavior is determined by the nature of the pathology they suffer from, and that the behavior of the professionals who prescribe to their patients is influenced by DTCA advertisements.