95 results for choice test
Abstract:
This paper analyzes the nature of health care provider choice in the case of patient-initiated contacts, with special reference to a National Health Service setting, where monetary prices are zero and general practitioners act as gatekeepers to publicly financed specialized care. We focus our attention on the factors that may explain the continuously increasing use of hospital emergency visits as opposed to other provider alternatives. An extended version of a discrete choice model of demand for patient-initiated contacts is presented, allowing for individual and town residence size differences in perceived quality (preferences) between alternative providers and including travel and waiting time as non-monetary costs. Results of a nested multinomial logit model of provider choice are presented. Individual choice between alternatives considers, in a repeated nested structure, self-care, primary care, hospital and clinic emergency services. Welfare implications and income effects are analyzed by computing compensating variations, and by simulating the effects of user fees by levels of income. Results indicate that the compensating variation per visit is higher than the direct marginal cost of emergency visits, and consequently, emergency visits do not appear as an inefficient alternative even for non-urgent conditions.
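The nested structure described in the abstract can be sketched numerically. In the sketch below the nesting of alternatives, the utility values and the dissimilarity parameters are all hypothetical illustrations, not the paper's estimates:

```python
# Illustrative nested logit choice probabilities. Nests, utilities and
# dissimilarity parameters are hypothetical, not the paper's estimates.
from math import exp, log

def nested_logit_probs(nests, lambdas):
    """nests: {nest: {alternative: utility V_j}};
    lambdas: {nest: dissimilarity parameter in (0, 1]}."""
    # Inclusive value of each nest: IV_m = log(sum_j exp(V_j / lambda_m))
    iv = {m: log(sum(exp(v / lambdas[m]) for v in alts.values()))
          for m, alts in nests.items()}
    # Marginal probability of each nest, then conditional choice within it
    denom = sum(exp(lambdas[k] * iv[k]) for k in nests)
    probs = {}
    for m, alts in nests.items():
        p_nest = exp(lambdas[m] * iv[m]) / denom
        for j, v in alts.items():
            probs[j] = p_nest * exp(v / lambdas[m] - iv[m])
    return probs

# Hypothetical utilities for the four alternatives named in the abstract
nests = {"no_visit": {"self_care": 0.0},
         "visit": {"primary_care": 0.4, "hospital_er": 0.9, "clinic_er": 0.6}}
p = nested_logit_probs(nests, {"no_visit": 1.0, "visit": 0.6})
```

A dissimilarity parameter below one makes the three care alternatives closer substitutes for each other than for self-care, which is the point of nesting them.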
Abstract:
A general formalism on stochastic choice is presented. The Rationalizability and Recoverability (Identification) problems are discussed. For the identification issue, parametric examples are analyzed by means of techniques of mathematical tomography (Radon transforms).
Abstract:
The effectiveness of pre-play communication in achieving efficient outcomes has long been a subject of controversy. In some environments, cheap talk may help to achieve coordination. However, Aumann conjectures that, in a variant of the Stag Hunt game, a signal for efficient play is not self-enforcing and concludes that an "agreement to play [the efficient outcome] conveys no information about what the players will do." Harsanyi and Selten (1988) cite this example as an illustration of risk-dominance vs. payoff-dominance. Farrell and Rabin (1996) agree with the logic, but suspect that cheap talk will nonetheless achieve efficiency. The conjecture is tested with one-way communication. When the sender first chooses a signal and then an action, there is impressive coordination: a 94% probability for the potentially efficient (but risky) play, given a signal for efficient play. Without communication, efforts to achieve efficiency were unsuccessful, as the proportion of B moves is only 35%. I also test a hypothesis that the order of the action and the signal affects the results, finding that the decision order is indeed important. While Aumann's conjecture is behaviorally disconfirmed when the signal is determined initially, the signal's credibility seems to be much more suspect when the sender is known to have first chosen an action, and the results are not statistically distinguishable from those when there is no signal. Some applications and issues in communication and coordination are discussed.
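The tension the abstract describes, between a payoff-dominant and a risk-dominant equilibrium, can be checked mechanically. The payoff matrix below is a hypothetical stag hunt (the experiment's actual payoffs are not given in the abstract); B is the efficient but risky move, A the safe one:

```python
# Hypothetical stag hunt payoffs: (row move, column move) -> (row, column)
payoffs = {
    ("B", "B"): (9, 9), ("B", "A"): (0, 8),
    ("A", "B"): (8, 0), ("A", "A"): (7, 7),
}

def payoff_dominant(eq1, eq2):
    """eq1 payoff-dominates eq2 if both players weakly prefer it, one strictly."""
    p1, p2 = payoffs[eq1], payoffs[eq2]
    return p1[0] >= p2[0] and p1[1] >= p2[1] and p1 != p2

def risk_dominant(eq1, eq2):
    """Harsanyi-Selten criterion: eq1 risk-dominates eq2 if its product of
    unilateral deviation losses is larger."""
    def dev_loss_product(eq, other):
        (r, c), (ro, co) = eq, other
        row_loss = payoffs[eq][0] - payoffs[(ro, c)][0]
        col_loss = payoffs[eq][1] - payoffs[(r, co)][1]
        return row_loss * col_loss
    return dev_loss_product(eq1, eq2) > dev_loss_product(eq2, eq1)
```

With these numbers (B,B) is payoff-dominant while (A,A) is risk-dominant, which is exactly the conflict Harsanyi and Selten use the example to illustrate.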
Abstract:
This paper argues that low-stakes test scores, available in surveys, may be partially determined by test-taking motivation, which is associated with personality traits but not with cognitive ability. Therefore, such test score distributions may not be informative regarding cognitive ability distributions. Moreover, correlations, found in survey data, between high test scores and economic success may be partially caused by favorable personality traits. To demonstrate these points, I use the coding speed test that was administered without incentives to National Longitudinal Survey of Youth 1979 (NLSY) participants. I suggest that due to its simplicity its scores may especially depend on individuals' test-taking motivation. I show that, controlling for conventional measures of cognitive skills, the coding speed scores are correlated with future earnings of male NLSY participants. Moreover, the coding speed scores of a highly motivated, though less educated, population (potential enlistees to the armed forces) are higher than NLSY participants' scores. I then use controlled experiments to show that when no performance-based incentives are provided, participants' characteristics, but not their cognitive skills, affect the effort invested in the coding speed test. Thus, participants with the same ability (measured by their scores on an incentivized test) have significantly different scores on tests without performance-based incentives.
Abstract:
This article presents, discusses and tests the hypothesis that it is the number of parties that explains the choice of electoral systems, rather than the other way round. Already existing political parties tend to choose electoral systems that, rather than generating new party systems by themselves, crystallize, consolidate or reinforce previously existing party configurations. A general model develops the argument and presents the concept of 'behavioral-institutional equilibrium' to account for the relation between electoral systems and party systems. The most comprehensive dataset and test of these notions to date, encompassing 219 elections in 87 countries since the 19th century, are presented. The analysis gives strong support to the hypotheses that political party configurations dominated by a few parties tend to establish majority-rule electoral systems, while multiparty systems already existed before the introduction of proportional representation. It also offers the new theoretical proposition that strategic party choice of electoral systems leads to a general trend toward proportional representation over time.
Abstract:
The origins of electoral systems have received scant attention in the literature. Looking at the history of electoral rules in the advanced world in the last century, this paper shows that the existing wide variation in electoral rules across nations can be traced to the strategic decisions that the current ruling parties, anticipating the coordinating consequences of different electoral regimes, make to maximize their representation, according to the following conditions. On the one hand, as long as the electoral arena does not change substantially and the current electoral regime serves the ruling parties well, the latter have no incentive to modify the electoral regime. On the other hand, as soon as the electoral arena changes (due to the entry of new voters or a change in their preferences), the ruling parties will entertain changing the electoral system, depending on two main conditions: the emergence of new parties and the coordinating capacities of the old ruling parties. Accordingly, if the new parties are strong, the old parties shift from plurality/majority rules to proportional representation (PR) only if the latter are locked into a 'non-Duvergerian' equilibrium, i.e. if no old party enjoys a dominant position (the case of most small European states); conversely, they do not if a Duvergerian equilibrium exists (the case of Great Britain). Similarly, whenever the new entrants are weak, a non-PR system is maintained, regardless of the structure of the old party system (the case of the USA). The paper also discusses the role of trade and of ethnic and religious heterogeneity in the adoption of PR rules.
Abstract:
This paper presents a dynamic choice model in the attribute space, considering rational consumers who discount the future. In light of the evidence of several state-dependence patterns, the model is further extended by considering a utility function that allows for the different types of behavior described in the literature: pure inertia, pure variety seeking and hybrid. The model presents a stationary consumption pattern that can be inertial, where the consumer only buys one product, or variety-seeking, where the consumer buys several products simultaneously. Under the inverted-U marginal utility assumption, the consumer behaves inertially among the existing brands for several periods and, eventually, once the stationary levels are approached, turns to variety-seeking behavior. An empirical analysis is run using a scanner database for fabric softener, and significant evidence of hybrid behavior for most attributes is found, which supports the functional form considered in the theory.
Abstract:
The effectiveness of decision rules depends on characteristics of both rules and environments. A theoretical analysis of environments specifies the relative predictive accuracies of the lexicographic rule 'take-the-best' (TTB) and other simple strategies for binary choice. We identify three factors: how the environment weights variables; characteristics of choice sets; and error. For cases involving from three to five binary cues, TTB is effective across many environments. However, hybrids of equal weights (EW) and TTB models are more effective as environments become more compensatory. In the presence of error, TTB and similar models do not predict much better than a naïve model that exploits dominance. We emphasize psychological implications and the need for more complete theories of the environment that include the role of error.
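The contrast between TTB and equal weights is easy to make concrete. The sketch below assumes binary cue vectors already sorted by validity (highest first); the cue values are invented for illustration:

```python
# Minimal take-the-best (TTB) for binary choice: cues are assumed sorted
# by validity, most valid first, with values 1/0.
def take_the_best(cues_a, cues_b):
    """Return 'A' or 'B' from the first discriminating cue, else None."""
    for ca, cb in zip(cues_a, cues_b):
        if ca != cb:           # the first cue that discriminates decides
            return "A" if ca > cb else "B"
    return None                # no cue discriminates: TTB must guess

def equal_weights(cues_a, cues_b):
    """EW benchmark: choose the option with more positive cues."""
    sa, sb = sum(cues_a), sum(cues_b)
    if sa == sb:
        return None
    return "A" if sa > sb else "B"

# The two rules can disagree: A wins on the most valid cue,
# B on the two less valid ones.
print(take_the_best([1, 0, 0], [0, 1, 1]))   # 'A'
print(equal_weights([1, 0, 0], [0, 1, 1]))   # 'B'
```

Cases like the last one are where the environment's weighting of variables decides which rule predicts better: non-compensatory weights favor TTB, compensatory weights favor EW.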
Abstract:
Several studies have reported high performance of simple decision heuristics in multi-attribute decision making. In this paper, we focus on situations where attributes are binary and analyze the performance of Deterministic-Elimination-By-Aspects (DEBA) and similar decision heuristics. We consider non-increasing weights and two probabilistic models for the attribute values: one where attribute values are independent Bernoulli random variables; the other where they are binary random variables with inter-attribute positive correlations. Using these models, we show that the good performance of DEBA is explained by the presence of cumulative as opposed to simple dominance. We therefore introduce the concepts of cumulative dominance compliance and full cumulative dominance compliance and show that DEBA satisfies these properties. We derive a lower bound on the probability with which cumulative dominance compliant heuristics will choose a best alternative and show that, even with many attributes, this is not small. We also derive an upper bound for the expected loss of fully cumulative dominance compliant heuristics and show that this is moderate even when the number of attributes is large. Both bounds are independent of the values of the weights.
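A minimal sketch of the DEBA heuristic itself, under the abstract's assumptions (binary attributes, ordered by non-increasing weight); the alternatives and attribute values are invented for illustration:

```python
# Deterministic-Elimination-By-Aspects (DEBA) sketch: walk attributes in
# decreasing-weight order, keeping only alternatives that have the attribute,
# until one alternative remains or attributes run out.
def deba(alternatives):
    """alternatives: {name: [binary attribute values, most important first]}."""
    remaining = dict(alternatives)
    n_attrs = len(next(iter(alternatives.values())))
    for i in range(n_attrs):
        keep = {k: v for k, v in remaining.items() if v[i] == 1}
        if keep:                   # eliminate only if someone survives
            remaining = keep
        if len(remaining) == 1:
            break
    return sorted(remaining)       # survivors (ties are possible)

alts = {"x": [1, 0, 1], "y": [1, 1, 0], "z": [0, 1, 1]}
print(deba(alts))   # ['y']: z is eliminated on attribute 1, x on attribute 2
```

Note that y here is also the cumulative-dominance choice: its running attribute sums (1, 2, 2) are never below those of x or z, which is the kind of structure the paper uses to explain DEBA's performance.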
Abstract:
Customer choice behavior, such as 'buy-up' and 'buy-down', is an important phenomenon in a wide range of industries. Yet there are few models or methodologies available to exploit this phenomenon within yield management systems. We make some progress on filling this void. Specifically, we develop a model of yield management in which the buyers' behavior is modeled explicitly using a multinomial logit model of demand. The control problem is to decide which subset of fare classes to offer at each point in time. The set of open fare classes then affects the purchase probabilities for each class. We formulate a dynamic program to determine the optimal control policy and show that it reduces to a dynamic nested allocation policy. Thus, the optimal choice-based policy can easily be implemented in reservation systems that use nested allocation controls. We also develop an estimation procedure for our model based on the expectation-maximization (EM) method that jointly estimates arrival rates and choice model parameters when no-purchase outcomes are unobservable. Numerical results show that this combined optimization-estimation approach may significantly improve revenue performance relative to traditional leg-based models that do not account for choice behavior.
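The core mechanism, purchase probabilities that depend on the set of open fare classes, can be sketched with a plain multinomial logit; the class names and utilities below are hypothetical, not estimates from the paper:

```python
# MNL purchase probabilities given the set S of open fare classes:
# P_j(S) = exp(v_j) / (exp(v0) + sum_{k in S} exp(v_k)), v0 = no-purchase.
# Utilities are hypothetical illustrations.
from math import exp

def purchase_probs(open_classes, v, v0=0.0):
    """v: {fare class: utility}; returns choice probabilities over S + no-purchase."""
    denom = exp(v0) + sum(exp(v[k]) for k in open_classes)
    probs = {k: exp(v[k]) / denom for k in open_classes}
    probs["no_purchase"] = exp(v0) / denom
    return probs

v = {"full_fare": -0.5, "discount": 0.8}
only_full = purchase_probs({"full_fare"}, v)
both_open = purchase_probs({"full_fare", "discount"}, v)
# Opening the discount class raises total sales (no-purchase falls) but also
# diverts demand away from the full fare -- the 'buy-down' effect.
```

The control problem in the paper is then which such set S to offer at each point in time; this sketch only shows how S moves the probabilities.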
Abstract:
This paper studies two important reasons why people violate procedure invariance, loss aversion and scale compatibility. The paper extends previous research on loss aversion and scale compatibility by studying loss aversion and scale compatibility simultaneously, by looking at a new decision domain, medical decision analysis, and by examining the effect of loss aversion and scale compatibility on "well-contemplated preferences." We find significant evidence both of loss aversion and scale compatibility. However, the sizes of the biases due to loss aversion and scale compatibility vary over trade-offs and most participants do not behave consistently according to loss aversion or scale compatibility. In particular, the effect of loss aversion in medical trade-offs decreases with duration. These findings are encouraging for utility measurement and prescriptive decision analysis. There appear to exist decision contexts in which the effects of loss aversion and scale compatibility can be minimized and utilities can be measured that do not suffer from these distorting factors.
Abstract:
Whereas people are typically thought to be better off with more choices, studies show that they often prefer to choose from small as opposed to large sets of alternatives. We propose that satisfaction from choice is an inverted U-shaped function of the number of alternatives. This proposition is derived theoretically by considering the benefits and costs of different numbers of alternatives and is supported by four experimental studies. We also manipulate the perceptual costs of information processing and demonstrate how this affects the resulting satisfaction function. We further indicate that satisfaction when choosing from a given set is diminished if people are made aware of the existence of other choice sets. The role of individual differences in satisfaction from choice is documented by noting effects due to gender and culture. We conclude by emphasizing the need to have an explicit rationale for knowing how much choice is enough.
Abstract:
In a world with two countries which differ in size, we study the impact of (the speed of) trade liberalization on firms' profits and the total welfare of the countries involved. Firms correctly anticipate the pace of trade liberalization and take it into account when deciding on their product choices, which are endogenously determined at the beginning of the game. Competition in the marketplace then occurs either on quantities or on prices. As long as the autarkic phase continues, local firms are national monopolists. When trade liberalization occurs, firms compete in an international duopoly. We analyze trade effects by using two different models of product differentiation. Across all the specifications adopted (and independently of the price vs. quantity competition hypothesis), total welfare always unambiguously rises with the speed of trade liberalization: possible losses by firms are always outweighed by consumers' gains, which come in the form of lower prices, enlarged variety or higher average qualities available. The effect on profits depends on the type of industry analyzed. Two results in particular seem worthy of mention. With vertical product differentiation and fixed costs of quality improvements, the expected size of the market faced by the firms determines the incentive to invest in quality. The longer the period of autarky, the lower the possibility that the firm from the small country will be producing the high quality and be the leader in the international market when it opens. On the contrary, when trade opens immediately, national markets do not play any role and firms from different countries have the same opportunity to become the leader. Hence, immediate trade liberalization might be in the interest of producers in the small country. In general, the smaller the small country, the more likely its firm will gain from trade liberalization. Losses for the small country's firm can arise when it is relegated to low-quality good production and the domestic market size is not very small. With horizontal product differentiation (the homogeneous good case being a limit case of it when the costs of differentiation tend to infinity), investments in differentiation benefit both firms equally. Firms from the small country do not run the risk of being relegated to a lower competitive position under trade. As a result, they would never lose from it. Instead, firms from the large country may still incur losses from the opening of trade when the market expansion effect is low (i.e. when the country is very large relative to the other).
Abstract:
We present a theory of context-dependent choice in which a consumer's attention is drawn to salient attributes of goods, such as quality or price. An attribute is salient for a good when it stands out among the good's attributes, relative to that attribute's average level in the choice set (or, more generally, the evoked set). Consumers attach disproportionately high weight to salient attributes, and their choices are tilted toward goods with higher quality/price ratios. The model accounts for a variety of disparate evidence, including decoy effects, context-dependent willingness to pay, and large shifts in demand in response to price shocks.
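A minimal sketch in the spirit of this model, under the common assumptions that salience is measured by σ(x, x̄) = |x − x̄| / (x + x̄) against the choice-set average and that the salient attribute gets the larger of two normalized weights; the goods and the distortion parameter δ are invented for illustration:

```python
# Salience-weighted valuation sketch (illustrative parameterization, not
# necessarily the paper's exact functional forms).
def salient_choice(goods, delta=0.7):
    """goods: {name: (quality, price)}; returns the salient thinker's choice."""
    qbar = sum(q for q, _ in goods.values()) / len(goods)
    pbar = sum(p for _, p in goods.values()) / len(goods)
    sigma = lambda x, xbar: abs(x - xbar) / (x + xbar)
    values = {}
    for name, (q, p) in goods.items():
        if sigma(q, qbar) >= sigma(p, pbar):   # quality salient for this good
            wq, wp = 2 / (1 + delta), 2 * delta / (1 + delta)
        else:                                  # price salient for this good
            wq, wp = 2 * delta / (1 + delta), 2 / (1 + delta)
        values[name] = wq * q - wp * p         # distorted utility q - p
    return max(values, key=values.get)

# A and B tie under rational utility q - p, but price is salient for both,
# tilting choice toward B, the good with the higher quality/price ratio.
goods = {"A": (5.0, 4.0), "B": (3.0, 2.0)}
print(salient_choice(goods))   # 'B'
```

The example reproduces the tilt described in the abstract: a rational q − p evaluator is indifferent between A and B, while the salient thinker picks B (quality/price 1.5 vs. 1.25).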
Abstract:
This paper tests the internal consistency of time trade-off utilities. We find significant violations of consistency in the direction predicted by loss aversion. The violations disappear for higher gauge durations. We show that loss aversion can also explain why, for short gauge durations, time trade-off utilities exceed standard gamble utilities. Our results suggest that time trade-off measurements that use relatively short gauge durations, like the widely used EuroQol algorithm (Dolan 1997), are affected by loss aversion and lead to utilities that are too high.