31 results for Discrete Regression and Qualitative Choice Models

at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Abstract:

The aim of this paper is twofold: firstly, to carry out a theoretical review of the most recent stated preference techniques used for eliciting consumers' preferences and, secondly, to compare the empirical results of two different stated preference discrete choice approaches. The approaches differ in the measurement scale of the dependent variable and, therefore, in the estimation method, despite both using a multinomial logit. One approach uses a complete ranking of full profiles (contingent ranking), that is, individuals must rank a set of alternatives from the most to the least preferred; the other uses a first-choice rule in which individuals must select the most preferred option from a choice set (choice experiment). The results show how important the measurement scale of the dependent variable becomes and to what extent procedure invariance is satisfied.
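
As a rough illustration of the choice-experiment branch (not the authors' code), a conditional multinomial logit can be fitted by maximum likelihood; the data, attribute count and starting values below are invented, and numpy/scipy are assumed to be available.

    # Conditional (multinomial) logit for a choice experiment: a minimal sketch.
    # Hypothetical data: N choice sets, J alternatives, K attributes per alternative.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    N, J, K = 500, 3, 2                      # choice sets, alternatives, attributes
    X = rng.normal(size=(N, J, K))           # attribute levels (e.g. price, quality)
    beta_true = np.array([-1.0, 0.8])

    def choice_probs(beta, X):
        v = X @ beta                         # deterministic utilities, shape (N, J)
        v -= v.max(axis=1, keepdims=True)    # numerical stability
        ev = np.exp(v)
        return ev / ev.sum(axis=1, keepdims=True)

    # Simulate first choices according to the logit probabilities
    y = np.array([rng.choice(J, p=p) for p in choice_probs(beta_true, X)])

    def neg_loglik(beta):
        p = choice_probs(beta, X)
        return -np.log(p[np.arange(N), y]).sum()

    res = minimize(neg_loglik, x0=np.zeros(K), method="BFGS")
    print("estimated coefficients:", res.x)  # roughly recovers beta_true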

Abstract:

The promotion of energy-efficient appliances is necessary to reduce the energy and environmental burden of the household sector. However, many studies have reported that a typical consumer underestimates the benefits of energy-saving investment when purchasing household electric appliances. To analyze this energy-efficiency gap problem, many scholars have estimated the implicit discount rates that consumers apply to energy-consuming durables. Although both hedonic and choice models have been used in previous studies, the two models have not yet been compared. This study uses point-of-sale data on Japanese residential air conditioners and estimates implicit discount rates with both hedonic and choice models. Both models demonstrate that a typical consumer underinvests in energy efficiency. Although choice models estimate a lower implicit discount rate than hedonic models, hedonic models estimate the values of other product characteristics more consistently than choice models.
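
The implicit discount rate can be illustrated with a back-of-the-envelope calculation: it is the rate at which the price premium of the efficient model equals the present value of its annual energy savings. The premium, saving and lifetime below are invented numbers, and scipy is assumed to be available.

    # Implicit discount rate: the rate r that equates the price premium of an
    # energy-efficient appliance with the present value of its energy savings.
    # All numbers are hypothetical.
    from scipy.optimize import brentq

    price_premium = 300.0     # extra purchase cost of the efficient model
    annual_saving = 60.0      # yearly reduction in electricity cost
    lifetime = 10             # years of use

    def npv_gap(r):
        pv = sum(annual_saving / (1.0 + r) ** t for t in range(1, lifetime + 1))
        return pv - price_premium

    implicit_rate = brentq(npv_gap, 1e-6, 10.0)   # search r in (0, 1000%]
    print(f"implicit discount rate: {implicit_rate:.1%}")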

Abstract:

The first generation models of currency crises have often been criticized because they predict that, in the absence of very large triggering shocks, currency attacks should be predictable and lead to small devaluations. This paper shows that these features of first generation models are not robust to the inclusion of private information. In particular, this paper analyzes a generalization of the Krugman-Flood-Garber (KFG) model, which relaxes the assumption that all consumers are perfectly informed about the level of fundamentals. In this environment, the KFG equilibrium of zero devaluation is only one of many possible equilibria. In all the other equilibria, the lack of perfect information delays the attack on the currency past the point at which the shadow exchange rate equals the peg, giving rise to unpredictable and discrete devaluations.
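
For flavour only, a stylized sketch of the timing logic (not the paper's model with private information): under a simple linear monetary rule, the shadow exchange rate rises with domestic credit; in the perfect-information KFG benchmark the attack comes exactly when the shadow rate reaches the peg, so the devaluation is zero, whereas a delayed attack produces a discrete jump. All parameter values below are invented.

    # Stylized KFG-type timing: shadow exchange rate vs. a fixed peg.
    # Hypothetical linear model: shadow(t) = a + b * credit(t), credit grows at rate mu.
    import numpy as np

    a, b, mu = 0.0, 1.0, 0.02        # invented parameters
    credit0, peg = 0.5, 1.0
    t = np.arange(0, 60)
    credit = credit0 + mu * t
    shadow = a + b * credit          # shadow (flexible) exchange rate

    t_star = t[np.argmax(shadow >= peg)]      # first date the shadow rate reaches the peg
    print("perfect-information attack date:", t_star,
          "devaluation:", shadow[t_star] - peg)

    delay = 10                                # an attack delayed by imperfect information
    t_attack = t_star + delay
    print("delayed attack date:", t_attack,
          "discrete devaluation:", round(shadow[t_attack] - peg, 3))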

Abstract:

We introduce and investigate a series of models for an infection of a diplodiploid host species by the bacterial endosymbiont Wolbachia. The continuous models are characterized by partial vertical transmission, cytoplasmic incompatibility and fitness costs associated with the infection. A particular aspect of interest is competition between mutually incompatible strains. We further introduce an age-structured model that takes into account the different fertility and mortality rates at different stages of the life cycle of the individuals. With only a few parameters, the ordinary differential equation models already exhibit interesting dynamics and can be used to predict criteria under which a strain of bacteria is able to invade a population. Interestingly, but not surprisingly, the age-structured model shows significant differences concerning the existence and stability of equilibrium solutions compared to the unstructured model.
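
A minimal sketch of one plausible Wolbachia-style system (not necessarily the exact equations studied in the paper): infected and uninfected hosts with partial vertical transmission, a fecundity cost and cytoplasmic incompatibility, integrated with scipy. All parameter values are hypothetical.

    # Minimal Wolbachia-style ODE sketch: I = infected hosts, U = uninfected hosts.
    from scipy.integrate import solve_ivp

    b, d = 1.0, 0.5       # birth rate, density-dependent death rate (hypothetical)
    tau = 0.9             # vertical transmission probability
    c = 0.1               # fecundity cost of infection
    q = 0.95              # level of cytoplasmic incompatibility

    def rhs(t, y):
        I, U = y
        N = I + U
        male_inf = I / N                       # fraction of infected males
        dI = tau * b * (1 - c) * I - d * I * N
        dU = ((1 - tau) * b * (1 - c) * I      # imperfect transmission leaks uninfected offspring
              + b * U * (1 - q * male_inf)     # CI kills a fraction q of these crosses
              - d * U * N)
        return [dI, dU]

    sol = solve_ivp(rhs, (0, 100), [0.3, 0.7])
    I_end, U_end = sol.y[:, -1]
    print("final infection frequency:", I_end / (I_end + U_end))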

Abstract:

We present a real data set of claim amounts in which costs related to damage are recorded separately from those related to medical expenses. Only claims with positive costs are considered here. Two approaches to density estimation are presented: a classical parametric method and a semi-parametric method based on transformation kernel density estimation. We explore the data set with standard univariate methods. We also propose ways to select the bandwidth and transformation parameters in the univariate case based on Bayesian methods. We indicate how to compare the results of the alternative methods, both by looking at the shape of the estimated density over its whole domain and by exploring the density estimates in the right tail.
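
The transformation kernel idea can be sketched generically (this is not the authors' estimator nor their bandwidth/transformation selection): estimate the density of the log-transformed costs with a standard Gaussian kernel and map it back to the original scale. The claim data below are simulated.

    # Transformation kernel density estimation for heavy-tailed, positive costs.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(1)
    claims = rng.lognormal(mean=7.0, sigma=1.2, size=2000)   # hypothetical positive costs

    z = np.log(claims)                 # transformation step
    kde = gaussian_kde(z)              # classical kernel estimate on the transformed scale

    def density(x):
        x = np.asarray(x, dtype=float)
        return kde(np.log(x)) / x      # back-transform: f_X(x) = f_Z(log x) / x

    grid = np.linspace(claims.min(), np.quantile(claims, 0.99), 5)
    print(dict(zip(np.round(grid, 1), np.round(density(grid), 8))))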

Abstract:

Study carried out during a stay at the Stanford University School of Medicine, Division of Radiation Oncology, United States, between 2010 and 2012. During the two years of the postdoctoral fellowship I worked on two different projects. First, and as a continuation of previous studies by the group, we wanted to study the cause of the differences in hypoxia levels that we had observed in lung cancer models. Our hypothesis was that these differences were due to the functionality of the vasculature. We used two preclinical models: one in which tumors formed spontaneously in the lungs and another in which we injected the cells subcutaneously. We used techniques such as dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and the Hoechst 33342 perfusion assay, and both showed that the vasculature of the spontaneous tumors was far more functional than that of the subcutaneous tumors. From this study we can conclude that the differences in hypoxia levels among the different lung cancer tumor models could be due to variation in the formation and functionality of the vasculature. Therefore, the selection of preclinical models is essential, both for studies of hypoxia and angiogenesis and for therapies targeting these phenomena. The other project I worked on examined radiotherapy and its possible role in promoting tumor self-seeding by circulating tumor cells (CTCs). This effect has been described in some preclinical tumor models. To carry out our studies, we used a mouse breast cancer cell line, either permanently labeled with the Photinus pyralis gene or unlabeled, and performed in vitro and in vivo studies. Both studies showed that tumor irradiation promotes cell invasion and tumor self-seeding by CTCs. This finding should be considered in the context of clinical radiotherapy in order to achieve the best treatment for patients with elevated CTC levels.

Abstract:

We present a theory of context-dependent choice in which a consumer's attention is drawn to salient attributes of goods, such as quality or price. An attribute is salient for a good when it stands out among the good's attributes, relative to that attribute's average level in the choice set (or, generally, the evoked set). Consumers attach disproportionately high weight to salient attributes and their choices are tilted toward goods with higher quality/price ratios. The model accounts for a variety of disparate evidence, including decoy effects, context-dependent willingness to pay, and large shifts in demand in response to price shocks.
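
A toy numerical sketch of the salience logic (invented goods, qualities and prices; the contrast-style salience function below is a common choice, not necessarily the paper's): each attribute is compared with its choice-set average, and the less salient attribute is discounted.

    # Toy salience-weighted valuation with two hypothetical goods.
    quality = {"A": 6.0, "B": 9.0}
    price   = {"A": 2.0, "B": 5.0}
    delta = 0.7                           # discount applied to the non-salient attribute

    q_bar = sum(quality.values()) / len(quality)   # reference (average) quality
    p_bar = sum(price.values()) / len(price)       # reference (average) price

    def salience(x, x_bar):
        return abs(x - x_bar) / (abs(x) + abs(x_bar))   # contrast relative to the reference

    for g in quality:
        s_q, s_p = salience(quality[g], q_bar), salience(price[g], p_bar)
        if s_q > s_p:      # quality stands out: price is discounted
            value = quality[g] - delta * price[g]
        else:              # price stands out: quality is discounted
            value = delta * quality[g] - price[g]
        print(g, "quality-salient" if s_q > s_p else "price-salient", round(value, 2))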

Abstract:

This paper studies the short-run correlation of inflation and money growth. We study whether a model of learning can do better than a model of rational expectations, focusing on countries with high inflation. We take the money process as an exogenous variable, estimated from the data through a regime-switching process. We find that the rational expectations model and the model of learning both offer very good explanations for the joint behavior of money and prices.
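
As a generic illustration of the ingredients (not the paper's estimated model), the sketch below lets money growth switch between two regimes and has an agent track its mean by constant-gain learning; the regimes, gain and noise level are all invented.

    # Money growth switching between two regimes, tracked by constant-gain learning.
    import numpy as np

    rng = np.random.default_rng(2)
    T = 300
    mu = {0: 0.02, 1: 0.15}          # low- and high-money-growth regimes (hypothetical)
    p_stay = 0.95                    # probability of remaining in the current regime

    state, gain, belief = 0, 0.05, 0.02
    money_growth, beliefs = [], []
    for t in range(T):
        if rng.random() > p_stay:    # regime switch
            state = 1 - state
        g = mu[state] + 0.01 * rng.normal()
        belief += gain * (g - belief)          # constant-gain learning update
        money_growth.append(g)
        beliefs.append(belief)

    print("last observed money growth:", round(money_growth[-1], 3),
          "| learned belief:", round(beliefs[-1], 3))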

Abstract:

When actuaries face the problem of pricing an insurance contract that contains different types of coverage, such as a motor insurance or a homeowner's insurance policy, they usually assume that the types of claim are independent. However, this assumption may not be realistic: several studies have shown that there is a positive correlation between types of claim. Here we introduce different multivariate Poisson regression models in order to relax the independence assumption, including zero-inflated models to account for the excess of zeros and overdispersion. These models have been largely ignored to date, mainly because of their computational difficulties. Bayesian inference based on MCMC helps to overcome this problem (and also lets us derive, for several quantities of interest, posterior summaries that account for uncertainty). Finally, these models are applied to an automobile insurance claims database with three different types of claims. We analyse the consequences for pure and loaded premiums when the independence assumption is relaxed by using different multivariate Poisson regression models and their zero-inflated versions.
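
The common-shock construction that underlies bivariate Poisson regression, together with a zero-inflated variant, can be sketched by simulation (this is only the building block, not the paper's Bayesian MCMC estimation); all intensities below are invented.

    # Common-shock (trivariate reduction) construction of correlated claim counts.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000
    lam1, lam2, lam0 = 0.08, 0.05, 0.02   # hypothetical means: type 1, type 2, shared shock
    pi_zero = 0.30                        # zero-inflation: share of 'never-claim' policies

    x0 = rng.poisson(lam0, n)             # shared component induces positive correlation
    y1 = rng.poisson(lam1, n) + x0
    y2 = rng.poisson(lam2, n) + x0

    never = rng.random(n) < pi_zero       # zero-inflated version
    y1_zi, y2_zi = np.where(never, 0, y1), np.where(never, 0, y2)

    print("corr(y1, y2):", round(np.corrcoef(y1, y2)[0, 1], 3))
    print("share of (0, 0) pairs, zero-inflated:",
          round(np.mean((y1_zi == 0) & (y2_zi == 0)), 3))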

Abstract:

Interaction effects are usually modeled by means of moderated regression analysis. Structural equation models with non-linear constraints make it possible to estimate interaction effects while correcting for measurement error. From the various specifications, Jöreskog and Yang's (1996, 1998), likely the most parsimonious, has been chosen and further simplified. Up to now, only direct effects have been specified, thus wasting much of the capability of the structural equation approach. This paper presents and discusses an extension of Jöreskog and Yang's specification that can handle direct, indirect and interaction effects simultaneously. The model is illustrated by a study of the effects of an interactive style of use of budgets on both company innovation and performance.
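
For reference, the moderated-regression baseline mentioned above amounts to adding a product term to an ordinary regression; the simulated-data sketch below shows that baseline only, without the measurement-error correction that the structural equation specification provides.

    # Moderated regression: OLS with an interaction (product) term on simulated data.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 1000
    x, z = rng.normal(size=n), rng.normal(size=n)
    y = 1.0 + 0.5 * x + 0.3 * z + 0.4 * x * z + rng.normal(scale=1.0, size=n)

    X = np.column_stack([np.ones(n), x, z, x * z])        # design matrix with interaction
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("intercept, x, z, x*z:", np.round(coef, 2))     # interaction estimate near 0.4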

Abstract:

Several studies have reported high performance of simple decision heuristics in multi-attribute decision making. In this paper, we focus on situations where attributes are binary and analyze the performance of Deterministic-Elimination-By-Aspects (DEBA) and similar decision heuristics. We consider non-increasing weights and two probabilistic models for the attribute values: one where attribute values are independent Bernoulli random variables; the other where they are binary random variables with inter-attribute positive correlations. Using these models, we show that the good performance of DEBA is explained by the presence of cumulative as opposed to simple dominance. We therefore introduce the concepts of cumulative dominance compliance and fully cumulative dominance compliance and show that DEBA satisfies both properties. We derive a lower bound on the probability with which cumulative dominance compliant heuristics choose a best alternative and show that, even with many attributes, this bound is not small. We also derive an upper bound on the expected loss of fully cumulative dominance compliant heuristics and show that this is moderate even when the number of attributes is large. Both bounds are independent of the values of the weights.
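
A minimal implementation of DEBA for binary attributes (the alternatives and attribute ordering below are purely illustrative): attributes are processed from most to least important, and alternatives lacking an attribute possessed by some surviving alternative are eliminated.

    # Deterministic Elimination-By-Aspects (DEBA) for binary attributes.
    def deba(alternatives):
        """alternatives: dict name -> tuple of 0/1 attribute values,
        ordered from most to least important."""
        surviving = dict(alternatives)
        n_attrs = len(next(iter(alternatives.values())))
        for k in range(n_attrs):
            if any(v[k] == 1 for v in surviving.values()):
                surviving = {a: v for a, v in surviving.items() if v[k] == 1}
            if len(surviving) == 1:
                break
        return sorted(surviving)[0]          # arbitrary tie-break among survivors

    options = {"A": (1, 0, 1), "B": (1, 1, 0), "C": (0, 1, 1)}
    print(deba(options))   # the first attribute eliminates C; the second leaves only B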

Abstract:

The network choice revenue management problem models customers as choosing from an offer set, and the firm decides the best subset to offer at any given moment to maximize expected revenue. The resulting dynamic program for the firm is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. However, under the choice-set paradigm, when the segment consideration sets overlap, the CDLP is difficult to solve. Column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper, starting with a concave program formulation based on segment-level consideration sets called SDCP, we add a class of constraints called product constraints that project onto subsets of intersections. In addition, we propose a natural direct tightening of the SDCP called ?SDCP, and compare the performance of both methods on the benchmark data sets in the literature. Both the product constraints and the ?SDCP method are very simple and easy to implement and are applicable to the case of overlapping segment consideration sets. In our computational testing on the benchmark data sets in the literature, SDCP with product constraints achieves the CDLP value at a fraction of the CPU time taken by column generation, and we believe it is a very promising approach for quickly approximating CDLP when segment consideration sets overlap and the consideration sets themselves are relatively small.
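
A tiny CDLP-style linear program conveys the object being approximated: a single resource, two fares and an MNL choice model, solved with scipy. This is a toy instance (all fares, weights and capacities invented), not the paper's SDCP or column-generation machinery.

    # Toy CDLP: choose how long to offer each subset of products to maximize revenue.
    import itertools
    from scipy.optimize import linprog

    fares = {1: 100.0, 2: 60.0}
    v = {0: 1.0, 1: 1.0, 2: 1.5}            # MNL weights; 0 = no-purchase
    lam, capacity, horizon = 1.0, 10.0, 20.0

    offer_sets = [frozenset(s) for r in range(3) for s in itertools.combinations(fares, r)]

    def probs(S):
        denom = v[0] + sum(v[j] for j in S)
        return {j: v[j] / denom for j in S}

    revenue = [lam * sum(fares[j] * p for j, p in probs(S).items()) for S in offer_sets]
    consumption = [lam * sum(probs(S).values()) for S in offer_sets]

    res = linprog(c=[-r for r in revenue],
                  A_ub=[consumption, [1.0] * len(offer_sets)],   # capacity and time budgets
                  b_ub=[capacity, horizon],
                  bounds=[(0, None)] * len(offer_sets))
    for S, t in zip(offer_sets, res.x):
        if t > 1e-6:
            print(sorted(S), "offered for", round(t, 2), "periods")
    print("CDLP upper bound on revenue:", round(-res.fun, 2))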

Abstract:

Many dynamic revenue management models divide the sale period into a finite number of periods T and assume, invoking a fine-enough grid of time, that each period sees at most one booking request. These Poisson-type assumptions restrict the variability of the demand in the model, but researchers and practitioners have been willing to overlook this for the benefit of tractability of the models. In this paper, we criticize this model from another angle. Estimating the discrete finite-period model poses problems of indeterminacy and non-robustness: arbitrarily fixing T leads to arbitrary control values, and estimating T from data adds an additional layer of indeterminacy. To counter this, we first propose an alternative finite-population model that avoids the problem of fixing T and allows a wider range of demand distributions, while retaining the useful marginal-value properties of the finite-period model. The finite-population model still requires jointly estimating the market size and the parameters of the customer purchase model without observing no-purchases. Estimation of market size when no-purchases are unobservable has rarely been attempted in the marketing or revenue management literature. Indeed, we point out that it is akin to the classical statistical problem of estimating the parameters of a binomial distribution with unknown population size and success probability, and hence likely to be challenging. However, when the purchase probabilities are given by a functional form such as a multinomial logit model, we propose an estimation heuristic that exploits the specification of the functional form, the variety of the offer sets in a typical RM setting, and qualitative knowledge of arrival rates. Finally, we perform simulations to show that the estimator is very promising in obtaining unbiased estimates of the population size and the model parameters.
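
The "binomial with unknown population size and success probability" difficulty can be seen in a few lines (simulated data; this is not the paper's heuristic, which exploits the MNL structure and offer-set variety): profiling out the success probability leaves a log-likelihood in N that is nearly flat over a wide range.

    # Profile likelihood for Binomial(N, p) with both N and p unknown.
    import numpy as np
    from scipy.stats import binom

    rng = np.random.default_rng(5)
    true_N, true_p = 200, 0.05
    counts = rng.binomial(true_N, true_p, size=12)   # e.g. bookings in 12 comparable seasons

    def profile_loglik(N):
        p_hat = counts.mean() / N                    # MLE of p for a given N
        return binom.logpmf(counts, N, p_hat).sum()

    grid = np.arange(counts.max(), 2001)
    ll = np.array([profile_loglik(N) for N in grid])
    best = grid[ll.argmax()]
    near_flat = grid[ll > ll.max() - 1.0]            # N values almost as likely as the best
    print("MLE of N:", best,
          "| nearly-flat range:", near_flat.min(), "to", near_flat.max())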

Abstract:

Several methods and approaches for measuring parameters to determine fecal sources of pollution in water have been developed in recent years. No single microbial or chemical parameter has proved sufficient to determine the source of fecal pollution. Combinations of parameters involving at least one discriminating indicator and one universal fecal indicator offer the most promising solutions for qualitative and quantitative analyses. The universal (nondiscriminating) fecal indicator provides quantitative information regarding the fecal load. The discriminating indicator contributes to the identification of a specific source. The relative values of the parameters derived from both kinds of indicators could provide information regarding the contribution to the total fecal load from each origin. It is also essential that both parameters characteristically persist in the environment for similar periods. Numerical analysis, such as inductive learning methods, could be used to select the most suitable and the lowest number of parameters to develop predictive models. These combinations of parameters provide information on factors affecting the models, such as dilution, specific types of animal source, persistence of microbial tracers, and complex mixtures from different sources. The combined use of the enumeration of somatic coliphages and the enumeration of Bacteroides-phages using different host specific strains (one from humans and another from pigs), both selected using the suggested approach, provides a feasible model for quantitative and qualitative analyses of fecal source identification.
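
A schematic sketch of the ratio idea (invented counts; the simple ratio rule below stands in for the inductive-learning models mentioned above): the universal indicator quantifies the fecal load, and the host-specific indicators, scaled by it, point to the dominant origin.

    # Universal vs. host-specific indicators: ratios suggest the dominant source.
    samples = {
        # sample: (somatic coliphages, human-specific phages, pig-specific phages), per 100 ml
        "river_site_1": (1.0e4, 2.0e3, 5.0e1),
        "river_site_2": (5.0e3, 4.0e1, 9.0e2),
    }

    for name, (universal, human, pig) in samples.items():
        ratios = {"human": human / universal, "pig": pig / universal}
        dominant = max(ratios, key=ratios.get)
        print(f"{name}: fecal load ~{universal:.0e} PFU/100 ml, "
              f"human ratio {ratios['human']:.3f}, pig ratio {ratios['pig']:.3f} "
              f"-> predominantly {dominant}")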
