45 results for BIASES
Abstract:
We present a new model of money management, in which investors delegate portfolio management to professionals based not only on performance, but also on trust. Trust in the manager reduces an investor's perception of the riskiness of a given investment, and allows managers to charge higher fees to investors who trust them more. Money managers compete for investor funds by setting their fees, but because of trust the fees do not fall to costs. In the model, 1) managers consistently underperform the market net of fees but investors still prefer to delegate money management to taking risk on their own, 2) fees involve sharing of expected returns between managers and investors, with higher fees in riskier products, 3) managers pander to investors when investors exhibit biases in their beliefs, and do not correct misperceptions, and 4) despite long run benefits from better performance, the profits from pandering to trusting investors discourage managers from pursuing contrarian strategies relative to the case with no trust. We show how trust-mediated money management renders arbitrage less effective, and may help destabilize financial markets.
Abstract:
Among the underlying assumptions of the Black-Scholes option pricing model, those of a fixed volatility of the underlying asset and of a constant short-term riskless interest rate cause the largest empirical biases. Only recently has attention been paid to the simultaneous effects of the stochastic nature of both variables on the pricing of options. This paper tries to estimate the effects of a stochastic volatility and a stochastic interest rate in the Spanish option market. A discrete approach was used. Symmetric and asymmetric GARCH models were tried. The presence of in-the-mean and seasonality effects was allowed. The stochastic processes of the MIBOR90, a Spanish short-term interest rate, from March 19, 1990 to May 31, 1994, and of the volatility of the returns of the most important Spanish stock index (IBEX-35), from October 1, 1987 to January 20, 1994, were estimated. These estimates were used in pricing call options on the stock index from November 30, 1993 to May 30, 1994. The Hull-White and Amin-Ng pricing formulas were used. These prices were compared with actual prices and with those derived from the Black-Scholes formula, trying to detect the biases reported previously in the literature. Whereas the conditional variance of the MIBOR90 interest rate seemed to be free of ARCH effects, an asymmetric GARCH with in-the-mean and seasonality effects and some evidence of persistence in variance (IEGARCH(1,2)-M-S) was found to be the model that best represents the behavior of the stochastic volatility of the IBEX-35 stock returns. All the biases reported previously in the literature were found. All the formulas overpriced the options in the near-the-money case and underpriced them otherwise. Furthermore, in most option trading, Black-Scholes overpriced the options and, because of the time-to-maturity effect, the implied volatility computed from the Black-Scholes formula underestimated the actual volatility.
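Illustrative sketch (Python, not the paper's code): the benchmark Black-Scholes call price and the bisection used to back out implied volatility, the quantities against which the biases above are measured. All numerical inputs are hypothetical, not IBEX-35 data.

# Minimal sketch of Black-Scholes call pricing and implied volatility.
# All inputs below are hypothetical examples, not data from the study above.
from math import exp, log, sqrt
from scipy.stats import norm

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, r, T, lo=1e-4, hi=3.0, tol=1e-8):
    """Invert the formula by bisection to recover the implied volatility."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, r, mid, T) > price:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical near-the-money call on a stock index.
S, K, r, T = 3300.0, 3300.0, 0.08, 0.25
market_price = 140.0
print(bs_call(S, K, r, 0.20, T))              # model price at 20% volatility
print(implied_vol(market_price, S, K, r, T))  # volatility implied by the market price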
Abstract:
We present a model of intuitive inference, called local thinking, in which an agent combines data received from the external world with information retrieved from memory to evaluate a hypothesis. In this model, selected and limited recall of information follows a version of the representativeness heuristic. The model can account for some of the evidence on judgment biases, including conjunction and disjunction fallacies, but also for several anomalies related to demand for insurance.
Abstract:
The paper contrasts empirically the results of alternative methods for estimating the value and the depreciation of mineral resources. The historical data of Mexico and Venezuela, covering the period 1920s-1980s, is used to contrast the results of several methods. These are the present value, the net price method, the user cost method and the imputed income method. The paper establishes that the net price and the user cost are not competing methods as such, but alternative adjustments to different scenarios of closed and open economies. The results prove that the biases of the methods, as commonly described in the theoretical literature, only hold under the most restricted scenario of constant rents over time. It is argued that the difference between what is expected to happen and what actually did happen is for the most part due to a missing variable, namely technological change. This is an important caveat to the recommendations made based on these models.
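Illustrative sketch (Python, hypothetical figures, not the paper's calculations): the net price method charges the full unit rent times extraction as depreciation, whereas El Serafy's user cost method, as usually stated, charges only the share of current rent that would need to be reinvested to sustain income, which is where the divergence discussed above comes from.

def net_price_depreciation(price, marginal_cost, extraction):
    # Net price method: depreciation = unit rent x quantity extracted.
    return (price - marginal_cost) * extraction

def user_cost_depreciation(rent, discount_rate, years_remaining):
    # El Serafy user cost rule (standard statement): R / (1 + r) ** (n + 1),
    # the share of current rent R to set aside given n remaining years.
    return rent / (1.0 + discount_rate) ** (years_remaining + 1)

# Hypothetical figures for one year of extraction.
price, marginal_cost, extraction = 20.0, 12.0, 100.0
rent = (price - marginal_cost) * extraction
print(net_price_depreciation(price, marginal_cost, extraction))  # 800.0
print(user_cost_depreciation(rent, 0.05, 30))                    # about 176, far smaller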
Abstract:
A new debate over the speed of convergence in per capita income across economies is going on. Cross sectional estimates support the idea of slow convergence of about two percent per year. Panel data estimates support the idea of fast convergence of five, ten or even twenty percent per year. This paper shows that, if you "do it right", even the panel data estimation method yields the result of slow convergence of about two percent per year.
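Illustrative sketch (Python, not from the paper): the standard mapping between the estimated coefficient on log initial income in a growth regression and the implied annual speed of convergence, the number the two-percent debate is about.

from math import exp, log

def implied_lambda(b, T):
    # Speed of convergence implied by the coefficient b on log initial income
    # in a growth regression over a span of T years: b = -(1 - exp(-lam * T)) / T.
    return -log(1.0 + b * T) / T

def coefficient_from_lambda(lam, T):
    # Inverse mapping, useful as a check.
    return -(1.0 - exp(-lam * T)) / T

# A coefficient of about -0.018 over a 10-year span corresponds to
# convergence of roughly two percent per year.
print(implied_lambda(-0.018, 10))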
Abstract:
We argue that during the crystallization of common and civil law in the 19th century, the optimal degree of discretion in judicial rulemaking, albeit influenced by the comparative advantages of both legislative and judicial rulemaking, was mainly determined by the anti-market biases of the judiciary. The different degrees of judicial discretion adopted in both legal traditions were thus optimally adapted to different circumstances, mainly rooted in the unique, market-friendly, evolutionary transition enjoyed by English common law as opposed to the revolutionary environment of the civil law. On the Continent, constraining judicial discretion was essential for enforcing freedom of contract and establishing a market economy. The ongoing debasement of pro-market fundamentals in both branches of the Western legal system is explained from this perspective as a consequence of increased perceptions of exogenous risks and changes in the political system, which favored the adoption of sharing solutions and removed the cognitive advantage of parliaments and political leaders.
Abstract:
This paper studies two important reasons why people violate procedure invariance: loss aversion and scale compatibility. The paper extends previous research on loss aversion and scale compatibility by studying loss aversion and scale compatibility simultaneously, by looking at a new decision domain, medical decision analysis, and by examining the effect of loss aversion and scale compatibility on "well-contemplated preferences." We find significant evidence of both loss aversion and scale compatibility. However, the sizes of the biases due to loss aversion and scale compatibility vary over trade-offs, and most participants do not behave consistently according to loss aversion or scale compatibility. In particular, the effect of loss aversion in medical trade-offs decreases with duration. These findings are encouraging for utility measurement and prescriptive decision analysis. There appear to exist decision contexts in which the effects of loss aversion and scale compatibility can be minimized and utilities can be measured that do not suffer from these distorting factors.
Abstract:
In this paper we analyze the sensitivity of the labour market decisions of workers close to retirement with respect to the incentives created by public regulations. We improve upon the extensive prior literature on the effect of pension incentives on retirement in two ways: first, by modeling the transitions between employment, unemployment and retirement in a simultaneous manner, paying special attention to the transition from unemployment to retirement (which is particularly important in Spain); second, by considering the influence of unobserved heterogeneity in the estimation of the effect of our (carefully constructed) incentive variables. Using administrative data, we find that, when properly defined, economic incentives have a strong impact on labour market decisions in Spain. Unemployment regulations are shown to be particularly influential for retirement behaviour, along with the more traditional determinants linked to the pension system. Pension variables also have a major bearing both on workers' reemployment decisions and on the strategic actions of employers. The quantitative impact of the incentives, however, is greatly affected by the existence of unobserved heterogeneity among workers. Its omission leads to sizable biases in the assessment of the sensitivity to economic incentives, a finding that has clear consequences for the credibility of any model-based policy analysis. We confirm the importance of this potential problem in one especially interesting instance: the reform of early retirement provisions undertaken in Spain in 2002. We use a difference-in-differences approach to measure the behavioural reaction to this change, finding a large overestimation when unobserved heterogeneity is not taken into account.
Abstract:
This paper proposes an argument that explains incumbency advantage without resorting to the collective irresponsibility of legislatures. For that purpose, we exploit the informational value of incumbency: incumbency provides voters with information about governing politicians that is not available for challengers. Because there are many reasons for high reelection rates other than incumbency status, we propose a measure of incumbency advantage that improves on the use of pure reelection success. We also study the relationship between incumbency advantage and ideological and selection biases. An important implication of our analysis is that the literature linking incumbency and legislature irresponsibility most likely provides an overestimation of the latter.
Abstract:
Recent research has highlighted the notion that people can make judgments and choices by means of two systems that are labeled here tacit (or intuitive) and deliberate (or analytic). Whereas most decisions typically involve both systems, this chapter examines the conditions under which each system is liable to be more effective. This aims to illuminate the age-old issue of whether and when people should trust intuition or analysis. To do this, a framework is presented to understand how the tacit and deliberate systems work in tandem. Distinctions are also made between the types of information typically used by both systems as well as the characteristics of environments that facilitate or hinder accurate learning by the tacit system. Next, several experiments that have contrasted intuitive and analytic modes on the same tasks are reviewed. Together, the theoretical framework and experimental evidence lead to specifying the trade-off that characterizes their relative effectiveness. Tacit system responses can be subject to biases. In making deliberate system responses, however, people might not be aware of the correct rule to deal with the task they are facing and/or make errors in executing it. Whether tacit or deliberate responses are more valid in particular circumstances requires assessing this trade-off. In this, the probability of making errors in deliberate thought is postulated to be a function of the analytical complexity of the task as perceived by the person. Thus the trade-off is one of bias (in implicit responses) versus analytical complexity (when tasks are handled in deliberate mode). Finally, it is noted that whereas much attention has been paid in the past to helping people make decisions in deliberate mode, efforts should also be directed toward improving the ability to make decisions in tacit mode, since the effectiveness of decisions clearly depends on both. This therefore represents an important frontier for research.
Abstract:
This paper demonstrates that, contrary to the conventional wisdom, measurement error biases in panel data estimation of convergence using OLS with fixed effects are huge, not trivial. It does so by way of the "skipping estimation": taking data from every m years of the sample (where m is an integer greater than or equal to 2), as opposed to every single year. It is shown that the estimated speed of convergence from the OLS with fixed effects is biased upwards by as much as 7 to 15%.
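Illustrative sketch (Python, hypothetical column names, not the paper's code): the skipping estimation idea of keeping only every m-th year of the panel before running the within (fixed effects) regression of growth on lagged log income.

import pandas as pd

def skip_sample(panel, m):
    # Keep only every m-th year of the panel, as in the skipping estimation.
    years = sorted(panel["year"].unique())
    return panel[panel["year"].isin(years[::m])]

def within_convergence_coef(panel):
    # OLS with country fixed effects of growth on lagged log income,
    # computed by demeaning both variables within each country.
    df = panel.sort_values(["country", "year"]).copy()
    df["lag_ly"] = df.groupby("country")["log_income"].shift(1)
    df["growth"] = df["log_income"] - df["lag_ly"]
    df = df.dropna()

    def demean(series):
        return series - series.groupby(df["country"]).transform("mean")

    y, x = demean(df["growth"]), demean(df["lag_ly"])
    return float((x * y).sum() / (x * x).sum())

# Hypothetical usage: 'panel' has columns country, year, log_income.
# coef_all  = within_convergence_coef(panel)                  # annual data
# coef_skip = within_convergence_coef(skip_sample(panel, 5))  # every 5th year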
Abstract:
Reductions in firing costs are often advocated as a way of increasing the dynamism of labour markets in both developed and less developed countries. Evidence from Europe and the U.S. on the impact of firing costs has, however, been mixed. Moreover, legislative changes both in Europe and the U.S. have been limited. This paper, instead, examines the impact of the Colombian Labour Market Reform of 1990, which substantially reduced dismissal costs. I estimate the incidence of a reduction in firing costs on worker turnover by exploiting the temporal change in the Colombian labour legislation as well as the variability in coverage between formal and informal sector workers. Using a grouping estimator to control for common aggregate shocks and selection, I find that the exit hazard rates into and out of unemployment increased after the reform by over 1% for formal workers (covered by the legislation) relative to informal workers (uncovered). The increase of the hazards implies a net decrease in unemployment of a third of a percentage point, which accounts for about one quarter of the fall in unemployment during the period of study.
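Illustrative sketch (Python, hypothetical data layout, a bare difference-in-differences of means rather than the paper's grouping estimator): comparing the change in exit rates for covered formal workers with the change for uncovered informal workers around the 1990 reform.

import pandas as pd

def diff_in_diff(df, outcome, treated_col="formal", post_col="post_reform"):
    # (treated post - treated pre) - (control post - control pre), using group means.
    means = df.groupby([treated_col, post_col])[outcome].mean()
    return (means.loc[(1, 1)] - means.loc[(1, 0)]) - (means.loc[(0, 1)] - means.loc[(0, 0)])

# Hypothetical usage: one row per worker spell with 0/1 indicators 'formal'
# and 'post_reform' and a 0/1 outcome such as 'exited_unemployment'.
# effect = diff_in_diff(spells, "exited_unemployment")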
Abstract:
Research on judgment and decision making presents a confusing picture of human abilities. For example, much research has emphasized the dysfunctional aspects of judgmental heuristics, and yet other findings suggest that these can be highly effective. A further line of research has modeled judgment as resulting from "as if" linear models. This paper illuminates the distinctions between these approaches by providing a common analytical framework based on the central theoretical premise that understanding human performance requires specifying how characteristics of the decision rules people use interact with the demands of the tasks they face. Our work synthesizes the analytical tools of lens model research with novel methodology developed to specify the effectiveness of heuristics in different environments, and allows direct comparisons between the different approaches. We illustrate with both theoretical analyses and simulations. We further link our results to the empirical literature by a meta-analysis of lens model studies and estimate both human and heuristic performance in the same tasks. Our results highlight the trade-off between linear models and heuristics. Whereas the former are cognitively demanding, the latter are simple to use. However, they require knowledge, and thus maps of when and which heuristic to employ.
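Illustrative sketch (Python, a toy simulation, not the paper's lens model analysis): contrasting a fitted linear model with a simple tallying heuristic in a cue-based environment; the heuristic skips estimation but needs to know the cue directions, which reflects the knowledge requirement mentioned above.

import numpy as np

rng = np.random.default_rng(0)

# Simulated environment: the criterion is a noisy linear function of five cues.
n, k = 200, 5
cues = rng.normal(size=(n, k))
true_weights = np.array([0.5, 0.3, 0.1, 0.05, 0.05])
criterion = cues @ true_weights + rng.normal(scale=0.5, size=n)

train, test = slice(0, 100), slice(100, None)

# Linear model: ordinary least squares fitted on the first half of the sample.
X_train = np.column_stack([np.ones(100), cues[train]])
beta, *_ = np.linalg.lstsq(X_train, criterion[train], rcond=None)
linear_pred = np.column_stack([np.ones(100), cues[test]]) @ beta

# Tallying heuristic: unit weights in the known cue directions, no fitting.
tally_pred = cues[test] @ np.sign(true_weights)

print(np.corrcoef(linear_pred, criterion[test])[0, 1])  # linear model achievement
print(np.corrcoef(tally_pred, criterion[test])[0, 1])   # tallying heuristic achievement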
Abstract:
A new statistical parallax method using the Maximum Likelihood principle is presented, allowing the simultaneous determination of a luminosity calibration, kinematic characteristics and spatial distribution of a given sample. This method has been developed for the exploitation of the Hipparcos data and presents several improvements with respect to the previous ones: the effects of the selection of the sample, the observational errors, the galactic rotation and the interstellar absorption are taken into account as an intrinsic part of the formulation (as opposed to external corrections). Furthermore, the method is able to identify and characterize physically distinct groups in inhomogeneous samples, thus avoiding biases due to unidentified components. Moreover, the implementation used by the authors is based on the extensive use of numerical methods, so avoiding the need for simplification of the equations and thus the bias they could introduce. Several examples of application using simulated samples are presented, to be followed by applications to real samples in forthcoming articles.
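Illustrative sketch (Python, a heavily stripped-down version with Gaussian errors and none of the selection, galactic rotation or absorption terms of the full method above): calibrating a mean absolute magnitude and its intrinsic dispersion by maximum likelihood from parallaxes and apparent magnitudes.

import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, plx_obs, plx_err, app_mag):
    # Population model: absolute magnitude M ~ N(M0, sigma_M); each observed
    # parallax (arcsec) is Gaussian around the value implied by M and the
    # apparent magnitude through M = m + 5 + 5 log10(pi).
    M0, log_sigma_M = params
    sigma_M = np.exp(log_sigma_M)
    M_grid = np.linspace(M0 - 5 * sigma_M, M0 + 5 * sigma_M, 201)
    dM = M_grid[1] - M_grid[0]
    prior = np.exp(-0.5 * ((M_grid - M0) / sigma_M) ** 2) / (sigma_M * np.sqrt(2 * np.pi))
    plx_true = 10.0 ** (0.2 * (M_grid[None, :] - app_mag[:, None] - 5.0))
    like = np.exp(-0.5 * ((plx_obs[:, None] - plx_true) / plx_err[:, None]) ** 2) \
           / (plx_err[:, None] * np.sqrt(2 * np.pi))
    marginal = np.sum(like * prior[None, :], axis=1) * dM  # integrate M out
    return -np.sum(np.log(marginal + 1e-300))

# Hypothetical usage with arrays plx_obs, plx_err (arcsec) and app_mag:
# fit = minimize(neg_log_likelihood, x0=[1.0, np.log(0.3)],
#                args=(plx_obs, plx_err, app_mag), method="Nelder-Mead")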