11 results for Good, John Mason, 1764-1827.
at Scottish Institute for Research in Economics (SIRE), United Kingdom
Abstract:
We report results from an experiment that explores the empirical validity of correlated equilibrium, an important generalization of the Nash equilibrium concept. Specifically, we seek to understand the conditions under which subjects playing the game of Chicken will condition their behavior on private, third-party recommendations drawn from known distributions. In a “good-recommendations” treatment, the distribution we use is a correlated equilibrium with payoffs better than any symmetric payoff in the convex hull of Nash equilibrium payoff vectors. In a “bad-recommendations” treatment, the distribution is a correlated equilibrium with payoffs worse than any Nash equilibrium payoff vector. In a “Nash-recommendations” treatment, the distribution is a convex combination of Nash equilibrium outcomes (which is also a correlated equilibrium), and in a fourth “very-good-recommendations” treatment, the distribution yields high payoffs, but is not a correlated equilibrium. We compare behavior in all of these treatments to the case where subjects do not receive recommendations. We find that when recommendations are not given to subjects, behavior is very close to mixed-strategy Nash equilibrium play. When recommendations are given, behavior does differ from mixed-strategy Nash equilibrium, with the nature of the differences varying according to the treatment. Our main finding is that subjects will follow third-party recommendations only if those recommendations derive from a correlated equilibrium, and further, only if that correlated equilibrium is payoff-enhancing relative to the available Nash equilibria.
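To make the solution concept concrete, here is a minimal Python sketch that checks the correlated-equilibrium incentive constraints for a candidate recommendation distribution in Chicken. The payoff numbers are Aumann's textbook example, not the matrix actually used in the experiment, and all names are illustrative.

```python
import itertools

# Chicken payoffs: action 0 = Swerve, 1 = Dare. The numbers are Aumann's
# classic example, not the experiment's actual matrix (hypothetical here).
U_ROW = {(0, 0): 6, (0, 1): 2, (1, 0): 7, (1, 1): 0}
U_COL = {(r, c): U_ROW[(c, r)] for (r, c) in itertools.product((0, 1), repeat=2)}

def is_correlated_equilibrium(p, tol=1e-9):
    """p maps (row_action, col_action) -> probability. Return True iff, for
    each player, obeying every recommendation weakly beats every deviation."""
    for a, dev in itertools.permutations((0, 1), 2):
        # Row player recommended a, contemplating a deviation to dev.
        row_gain = sum(p[(a, b)] * (U_ROW[(dev, b)] - U_ROW[(a, b)]) for b in (0, 1))
        # Column player recommended a, contemplating a deviation to dev.
        col_gain = sum(p[(b, a)] * (U_COL[(b, dev)] - U_COL[(b, a)]) for b in (0, 1))
        if row_gain > tol or col_gain > tol:
            return False
    return True

# A payoff-enhancing CE: weight 1/3 each on (Swerve, Swerve), (Swerve, Dare),
# (Dare, Swerve); expected payoff 5 per player vs. 14/3 in the mixed Nash.
good = {(0, 0): 1/3, (0, 1): 1/3, (1, 0): 1/3, (1, 1): 0.0}
print(is_correlated_equilibrium(good))  # True
```

Under this distribution a player told to Swerve infers the opponent was told Swerve or Dare with equal probability, so obeying yields 4 in expectation against 3.5 from deviating; that inequality is exactly what the function verifies for every recommendation.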
Abstract:
Faced with the problem of pricing complex contingent claims, an investor seeks to make his valuations robust to model uncertainty. We construct a notion of a model-uncertainty-induced utility function and show that model uncertainty increases the investor's effective risk aversion. Using the model-uncertainty-induced utility function, we extend the "No Good Deals" methodology of Cochrane and Saá-Requejo [2000] to compute lower and upper good deal bounds in the presence of model uncertainty. We illustrate the methodology using some numerical examples.
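For context, one common statement of the baseline good-deal bounds that the paper extends is the following: the upper bound on the value of a focus payoff $x^c$ solves

$$\overline{C} \;=\; \max_{m}\; E[m\,x^c] \quad \text{s.t.} \quad p = E[m\,x], \qquad m \ge 0, \qquad \sigma(m) \le \frac{h}{R^f},$$

with the lower bound $\underline{C}$ given by the corresponding minimum. The stochastic discount factor $m$ must price the basis assets $x$ at their observed prices $p$, stay positive to rule out arbitrage, and satisfy a volatility cap that limits the attainable Sharpe ratio to $h$. The paper computes the analogous bounds after replacing the investor's preferences with the model-uncertainty-induced utility function.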
Abstract:
This paper attempts to extend existing models of political agency to an environment in which voting may be divided among the informed and instrumental, the informed and ‘expressive’ (Brennan and Lomasky (1993)), and the uninformed subject to ‘rational irrationality’ (Caplan (2007)). It constructs a model where politicians may be good, bad or populist. Populists are more willing than good politicians to pander to voters who, in a large-group electoral setting where each individual vote is insignificant, may choose policies inferior to those they would choose were their vote decisive in determining the electoral outcome. Bad politicians would ideally like to extract tax revenue for their own ends. Initially we assume the existence of only good and populist politicians. The paper investigates the incentives for good politicians to pool with or separate from populists and focuses on three key issues: (1) how far the preferences of the majority of voters are from those held by the better-informed incumbent politician; (2) the extent to which the population exhibits rational irrationality and expressiveness (jointly labelled as emotional); and (3) the cost involved in persuading uninformed voters to change their views, in terms of composing messages and spreading them. This paper goes on to consider how the inclusion of bad politicians may affect the behaviour of good politicians and suggests that a small amount of potential corruption may be socially useful. It is also argued that where bad politicians have an incentive to mimic the behaviour of good and populist politicians, the latter types may have an incentive to separate from bad politicians by investing in costly public education signals. The paper also discusses the implications of the model for whether fiscal restraints should be soft or hard.
Abstract:
This paper addresses the hotly-debated question: do Chinese firms overinvest? A firm-level dataset of 100,000 firms over the period 2000-07 is employed for this purpose. We initially calculate measures of investment efficiency, which is typically negatively associated with overinvestment. Despite wide disparities across various ownership groups, industries and regions, we find that corporate investment in China has become increasingly efficient over time. However, based on direct measures of overinvestment that we subsequently calculate, we find evidence of overinvestment for all types of firms, even in the most efficient and most profitable private sector. We find that the free cash flow hypothesis provides a good explanation for China's overinvestment, especially for the private sector, while in the state sector, overinvestment is attributable to the poor screening and monitoring of enterprises by banks.
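As a concrete illustration of a direct overinvestment measure, the sketch below follows the common Richardson (2006)-style approach of regressing investment on observable fundamentals and reading positive residuals as overinvestment. The regressors and column names are hypothetical, and this need not be the authors' exact specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

def overinvestment_measure(df: pd.DataFrame) -> pd.Series:
    """df is a hypothetical firm-year panel; 'investment' is capex scaled by
    assets, and the regressors proxy for growth opportunities and financing
    status. Fitted values estimate 'expected' investment; positive residuals
    are the direct overinvestment measure."""
    model = smf.ols(
        "investment ~ sales_growth + leverage + cash + firm_age + size"
        " + lagged_investment",
        data=df,
    ).fit()
    return model.resid.clip(lower=0)  # keep only the overinvestment component
```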
Abstract:
We use a panel of over 120,000 Chinese firms of different ownership types over the period 2000-2007 to analyze the linkages between investment in fixed and working capital and financing constraints. We find that those firms characterized by high working capital display high sensitivities of investment in working capital to cash flow (WKS) and low sensitivities of investment in fixed capital to cash flow (FKS). We then construct and analyze firm-level FKS and WKS measures and find that, despite severe external financing constraints, those firms with low FKS and high WKS exhibit the highest fixed investment rates. This suggests that good working capital management may help firms to alleviate the effects of financing constraints on fixed investment.
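A minimal sketch of how firm-level FKS and WKS measures could be constructed, assuming a simple within-firm regression of each investment series on cash flow; the column names are illustrative and the authors' actual construction may differ.

```python
import pandas as pd
import statsmodels.formula.api as smf

def cash_flow_sensitivities(df: pd.DataFrame) -> pd.DataFrame:
    """df is a hypothetical firm-year panel with fixed_inv and wk_inv
    (investment in fixed and working capital) and cash_flow, all scaled by
    assets. Estimate firm-level FKS and WKS as within-firm OLS slopes of
    each investment series on cash flow (illustrative, not efficient)."""
    rows = []
    for firm, g in df.groupby("firm_id"):
        if len(g) < 3:  # skip firms with too few years to estimate a slope
            continue
        fks = smf.ols("fixed_inv ~ cash_flow", data=g).fit().params["cash_flow"]
        wks = smf.ols("wk_inv ~ cash_flow", data=g).fit().params["cash_flow"]
        rows.append({"firm_id": firm, "FKS": fks, "WKS": wks})
    return pd.DataFrame(rows)
```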
Abstract:
This paper presents a model of a self-fulfilling price cycle in an asset market. Price oscillates deterministically even though the underlying environment is stationary. The mechanism that we uncover is driven by endogenous variation in the investment horizons of the different market participants, informed and uninformed. On even days, the price is high; on odd days it is low. On even days, informed traders are willing to jettison their good assets, knowing that they can buy them back the next day, when the price is low. The anticipated drop in price more than offsets any potential loss in dividend. Because of these asset sales, the informed build up their cash holdings. Understanding that the market is flooded with good assets, the uninformed traders are willing to pay a high price. But their investment horizon is longer than that of the informed traders: their intention is to hold the assets they purchase, not to resell. On odd days, the price is low because the uninformed recognise that the informed are using their cash holdings to cherry-pick good assets from the market. Now the uninformed, like the informed, are investing short-term. Rather than buy-and-hold as they do with assets purchased on even days, on odd days the uninformed are buying to sell. Notice that, at the root of the model, there lies a credit constraint. Although the informed are flush with cash on odd days, they do not have deep pockets. On each cherry that they pick out of the market, they earn a high return: buying cheap, selling dear. However, they don't have enough cash to strip the market of cherries and thereby bid the price up.
Abstract:
This paper is an investigation into the dynamics of asset markets with adverse selection à la Akerlof (1970). The particular question asked is: can market failure at some later date precipitate market failure at an earlier date? The answer is yes: there can be "contagious illiquidity" from the future back to the present. The mechanism works as follows. If the market is expected to break down in the future, then agents holding assets they know to be lemons (assets with low returns) will be forced to hold them for longer - they cannot quickly resell them. As a result, the effective difference in payoff between a lemon and a good asset is greater. But it is known from the static Akerlof model that the greater the payoff differential between lemons and non-lemons, the more likely is the market to break down. Hence market failure in the future is more likely to lead to market failure today. Conversely, if the market is not anticipated to break down in the future, assets can be readily sold and hence an agent discovering that his or her asset is a lemon can quickly jettison it. In effect, there is little difference in payoff between a lemon and a good asset. The logic of the static Akerlof model then runs the other way: the small payoff differential is unlikely to lead to market breakdown today. The conclusion of the paper is that the nature of today's market - liquid or illiquid - hinges critically on the nature of tomorrow's market, which in turn depends on the next day's, and so on. The tail wags the dog.
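For reference, the static Akerlof comparative static the argument leans on can be stated in illustrative notation (not the paper's): with a fraction $\lambda$ of lemons worth $v_L$ to buyers, good assets worth $v_H$, and a reservation value $r_H < v_H$ for holders of good assets, buyers pay at most the pooled average value, so good assets trade only if

$$\lambda\,v_L + (1-\lambda)\,v_H \;\ge\; r_H \quad\Longleftrightarrow\quad \lambda\,(v_H - v_L) \;\le\; v_H - r_H.$$

A larger payoff differential $v_H - v_L$ tightens this condition and makes breakdown more likely, which is the channel through which anticipated future illiquidity feeds back into today's market.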
Abstract:
In this analysis, we examine the relationship between an individual's decision to volunteer and the average level of volunteering in the community where the individual resides. Our theoretical model is based on a coordination game, in which volunteering by others is informative regarding the benefit from volunteering. We demonstrate that the interaction between this information and one's private information makes it more likely that he or she will volunteer, given a higher level of contributions by his or her peers. We complement this theoretical work with an empirical analysis using Census 2000 Summary File 3 and Current Population Survey (CPS) 2004-2007 September supplement file data. We control for various individual and community characteristics, and employ robustness checks to verify the results of the baseline analysis. We additionally use an innovative instrumental variables strategy to account for reflection bias and endogeneity caused by selective sorting by individuals into neighborhoods, which allows us to argue for a causal interpretation. The empirical results in the baseline, as well as all robustness analyses, verify the main result of our theoretical model, and we employ a more general structure to further strengthen our results.
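As a sketch of what such a two-stage strategy looks like in practice: the abstract does not name the actual instrument or controls, so every variable below is hypothetical.

```python
import pandas as pd
from linearmodels.iv import IV2SLS

def peer_effect_2sls(df: pd.DataFrame):
    """df is hypothetical individual-level data. peer_rate is the community
    volunteering rate; 'instrument' stands in for whatever excluded variable
    shifts peer volunteering without directly affecting own volunteering.
    The bracketed term marks the endogenous regressor and its instrument."""
    model = IV2SLS.from_formula(
        "volunteer ~ 1 + age + income + education + [peer_rate ~ instrument]",
        data=df,
    )
    return model.fit(cov_type="robust")  # heteroskedasticity-robust SEs
```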
Abstract:
This paper compares how increases in experience versus increases in knowledge about a public good affect willingness to pay (WTP) for its provision. This is challenging because while consumers are often certain about their previous experiences with a good, they may be uncertain about the accuracy of their knowledge. We therefore design and conduct a field experiment in which treated subjects receive a precise and objective signal regarding their knowledge about a public good before estimating their WTP for it. Using data for two different public goods, we show qualitative equivalence of the effects of knowledge and experience on valuation of a public good. Surprisingly, though, we find that objective signals about the accuracy of a subject’s knowledge of a public good can dramatically affect their valuation of it: treatment causes an increase of $150-$200 in WTP for well-informed individuals. We find no such effect for less informed subjects. Our results imply that WTP estimates for public goods are a function not only of respondents’ true information states but also of their beliefs about those information states.
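A minimal sketch of the kind of regression that would recover the reported heterogeneous effect, with hypothetical variable names (the paper's actual estimation strategy may differ):

```python
import pandas as pd
import statsmodels.formula.api as smf

def treatment_effect_by_knowledge(df: pd.DataFrame):
    """df is hypothetical subject-level data: wtp is elicited willingness to
    pay, treated marks subjects who received the objective knowledge signal,
    informed marks subjects whose prior knowledge was accurate. The
    treated:informed coefficient captures the differential effect the
    abstract reports ($150-$200 extra WTP for well-informed treated)."""
    return smf.ols("wtp ~ treated * informed", data=df).fit(cov_type="HC1")
```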
Abstract:
In this study we elicit agents’ prior information set regarding a public good, exogenously give information treatments to survey respondents, and subsequently elicit willingness to pay (WTP) for the good and posterior information sets. The design of this field experiment allows us to perform theoretically motivated hypothesis testing between different updating rules: non-informative updating, Bayesian updating, and incomplete updating. We find causal evidence that agents imperfectly update their information sets. We also find causal evidence that the amount of additional information provided to subjects relative to their pre-existing information levels can affect stated WTP in ways consistent with overload from too much learning. This result raises important (though familiar) issues for the use of stated preference methods in policy analysis.
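One way to nest the three updating rules in a single parameter (illustrative notation, not necessarily the paper's exact specification): let $\mu_0$ be the prior belief and $\mu^{B}$ the full Bayesian posterior given the information treatment, and model the reported posterior as

$$\mu_1 = \lambda\,\mu^{B} + (1-\lambda)\,\mu_0, \qquad \lambda \in [0,1],$$

so that $\lambda = 0$ corresponds to non-informative updating, $\lambda = 1$ to Bayesian updating, and $0 < \lambda < 1$ to incomplete updating; the hypothesis tests then reduce to tests on $\lambda$.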
Abstract:
In this analysis, we examine the relationship between an individual’s decision to volunteer and the average level of volunteering in the community where the individual resides. Our theoretical model is based on a coordination game, in which volunteering by others is informative regarding the benefit from volunteering. We demonstrate that the interaction between this information and one’s private information makes it more likely that he or she will volunteer, given a higher level of contributions by his or her peers. We complement this theoretical work with an empirical analysis using Census 2000 Summary File 3 and Current Population Survey (CPS) 2004-2007 September supplement file data. We control for various individual and community characteristics, and employ robustness checks to verify the results of the baseline analysis. We additionally use an innovative instrumental variables strategy to account for reflection bias and endogeneity caused by selective sorting by individuals into neighbourhoods, which allows us to argue for a causal interpretation. The empirical results in the baseline, as well as all robustness analyses, verify the main result of our theoretical model, and we employ a more general structure to further strengthen our results.