837 results for Subjective-probability
Abstract:
This paper revisits the debate over the importance of absolute vs. relative income as a correlate of subjective well-being using data from Bangladesh, one of the poorest countries in the world with high levels of corruption and poor governance. We do so by combining household data with population census and village survey records. Our results show that, conditional on own household income, respondents report higher satisfaction levels when they have experienced an increase in their income in recent years. More importantly, individuals who report their income to be lower than their neighbours' in the village also report less satisfaction with life. At the same time, our evidence suggests that the relative wealth effect is stronger for the rich. Similarly, in villages with higher inequality, individuals report less satisfaction with life. However, when compared to the effect of absolute income, these effects (i.e. relative income and local inequality) are modest. Amongst other factors, we study the influence of institutional quality. Institutional quality, measured in terms of confidence in police, matters for well-being: it enters with a positive and significant coefficient in the well-being function.
Abstract:
Proper scoring rules provide a useful means to evaluate probabilistic forecasts. Independently of scoring rules, it has been argued that reliability and resolution are desirable forecast attributes. The mathematical expectation of the score admits a decomposition into reliability- and resolution-related terms, demonstrating a relationship between scoring rules and reliability/resolution. A similar decomposition holds for the empirical (i.e. sample average) score over an archive of forecast–observation pairs. This empirical decomposition, however, provides an overly optimistic estimate of the potential score (i.e. the optimum score which could be obtained through recalibration), showing that a forecast assessment based solely on the empirical resolution and reliability terms will be misleading. The differences between the theoretical and empirical decompositions are investigated, and specific recommendations are given on how to obtain better estimators of reliability and resolution in the case of the Brier and Ignorance scoring rules.
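The empirical decomposition described here can be sketched for the Brier score (a minimal illustration of the standard Murphy decomposition, grouping observations by distinct forecast values; the function name is hypothetical, not from the paper):

```python
import numpy as np

def brier_decomposition(p, o):
    """Empirical Murphy decomposition of the Brier score.

    p : forecast probabilities (grouped by their distinct values)
    o : binary outcomes (0/1)
    Returns (brier, reliability, resolution, uncertainty), satisfying
    brier == reliability - resolution + uncertainty for discrete forecasts.
    """
    p, o = np.asarray(p, float), np.asarray(o, float)
    n = len(p)
    brier = np.mean((p - o) ** 2)
    obar = o.mean()                    # climatological base rate
    rel = res = 0.0
    for pk in np.unique(p):
        mask = p == pk
        nk = mask.sum()
        ok = o[mask].mean()            # conditional event frequency
        rel += nk * (pk - ok) ** 2     # reliability (calibration) term
        res += nk * (ok - obar) ** 2   # resolution term
    rel /= n
    res /= n
    unc = obar * (1.0 - obar)          # uncertainty term
    return brier, rel, res, unc
```

For forecasts taking a finite set of values the identity BS = REL − RES + UNC holds exactly; with binned continuous forecasts it is only approximate, which is one source of the estimation issues the paper discusses.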
Abstract:
The continuous ranked probability score (CRPS) is a frequently used scoring rule. In contrast with many other scoring rules, the CRPS evaluates cumulative distribution functions. An ensemble of forecasts can easily be converted into a piecewise constant cumulative distribution function with steps at the ensemble members. This renders the CRPS a convenient scoring rule for the evaluation of ‘raw’ ensembles, obviating the need for sophisticated ensemble model output statistics or dressing methods prior to evaluation. In this article, a relation between the CRPS score and the quantile score is established. The evaluation of ‘raw’ ensembles using the CRPS is discussed in this light. It is shown that latent in this evaluation is an interpretation of the ensemble as quantiles but with non-uniform levels. This needs to be taken into account if the ensemble is evaluated further, for example with rank histograms.
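For a finite ensemble interpreted as a piecewise-constant CDF, the CRPS has a closed form (the energy form E|X − y| − ½E|X − X′|); a minimal sketch, with a hypothetical function name:

```python
import numpy as np

def crps_ensemble(members, obs):
    """CRPS of a raw ensemble treated as a piecewise-constant CDF
    with steps at the sorted members (energy form)."""
    x = np.asarray(members, float)
    # E|X - y| over the ensemble members
    term1 = np.mean(np.abs(x - obs))
    # 0.5 * E|X - X'| over all member pairs
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term1 - term2
```

With a single member the CRPS reduces to the absolute error, consistent with its interpretation in terms of quantile scores.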
Abstract:
In this paper I analyze the general equilibrium of a random Walrasian economy. Dependence among agents is introduced in the form of dependency neighborhoods. Under uncertainty, an agent may fail to survive due to a meager endowment in a particular state (a direct effect), as well as due to an unfavorable equilibrium price system at which the value of the endowment falls short of the minimum needed for survival (an indirect terms-of-trade effect). To illustrate the main result, I compute the stochastic limit of the equilibrium price and the probability of survival of an agent in a large Cobb-Douglas economy.
Abstract:
We develop a new sparse kernel density estimator using a forward constrained regression framework, within which the nonnegative and summing-to-unity constraints on the mixing weights can easily be satisfied. Our main contribution is to derive a recursive algorithm to select significant kernels one at a time based on the minimum integrated square error (MISE) criterion for both the selection of kernels and the estimation of mixing weights. The proposed approach is simple to implement and the associated computational cost is very low. Specifically, the complexity of our algorithm is on the order of the number of training data points N, which is much lower than the order of N² offered by the best existing sparse kernel density estimators. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with comparable accuracy to those of the classical Parzen window estimate and other existing sparse kernel density estimators.
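For contrast with the sparse estimator described above, the classical Parzen window estimate against which such methods are benchmarked can be sketched as follows (a minimal illustration, not the paper's algorithm; names are hypothetical):

```python
import numpy as np

def parzen_kde(train, grid, h):
    """Classical Parzen-window density estimate with Gaussian kernels.

    Every training point contributes one kernel with equal weight 1/N,
    so evaluation costs O(N) per query point -- the baseline that
    sparse estimators thin out to a few significant kernels.
    """
    t = np.asarray(train, float)[:, None]   # shape (N, 1)
    g = np.asarray(grid, float)[None, :]    # shape (1, M)
    k = np.exp(-0.5 * ((g - t) / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    return k.mean(axis=0)                   # average over training points
```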
Abstract:
This paper examines the impact of the auction process on residential properties that, whilst unsuccessful at auction, sold subsequently. The empirical analysis considers both the probability of sale and the premium of the subsequent sale price over the guide price, reserve and opening bid. The findings highlight that the final achieved sale price is influenced by key price variables revealed both prior to and during the auction itself. Factors such as auction participation, the number of individual bidders and the number of bids are significant in a number of the alternative specifications.
Abstract:
We consider tests of forecast encompassing for probability forecasts, for both quadratic and logarithmic scoring rules. We propose test statistics for the null of forecast encompassing, present the limiting distributions of the test statistics, and investigate the impact of estimating the forecasting models' parameters on these distributions. The small-sample performance is investigated, in terms of small numbers of forecasts and model estimation sample sizes. We show the usefulness of the tests for the evaluation of recession probability forecasts from logit models with different leading indicators as explanatory variables, and for evaluating survey-based probability forecasts.
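The two scoring rules underlying these tests, the quadratic (Brier) and logarithmic rules, can be sketched as follows (minimal illustrations for binary-event probability forecasts; function names are hypothetical):

```python
import numpy as np

def qps(p, o):
    """Quadratic probability score (Brier score): mean squared
    distance between forecast probability and the 0/1 outcome."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    return np.mean((p - o) ** 2)

def log_score(p, o, eps=1e-12):
    """Logarithmic score: negative average log-likelihood of the
    outcomes under the forecast probabilities (clipped for safety)."""
    p = np.clip(np.asarray(p, float), eps, 1.0 - eps)
    o = np.asarray(o, float)
    return -np.mean(o * np.log(p) + (1.0 - o) * np.log(1.0 - p))
```

Both are proper scoring rules, so lower average scores reward forecasts closer to the true event probabilities.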
Abstract:
A new sparse kernel density estimator is introduced. Our main contribution is to develop a recursive algorithm for the selection of significant kernels one at a time using the minimum integrated square error (MISE) criterion for kernel selection. The proposed approach is simple to implement and the associated computational cost is very low. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with competitive accuracy to existing kernel density estimators.
Abstract:
We report experimental results on a prisoners' dilemma implemented in a way which allows us to elicit incentive-compatible valuations of the game. We test the hypothesis that players' valuations coincide with their Nash equilibrium earnings. Our results offer significantly less support for this hypothesis than for the prediction of Dominant Strategy (DS) play.
Abstract:
We conducted 2 longitudinal mediational studies to test an integrative model of goals, stress and coping, and well-being. Study 1 documented avoidance personal goals as an antecedent of life stressors and life stressors as a partial mediator of the relation between avoidance goals and longitudinal change in subjective well-being (SWB). Study 2 fully replicated Study 1 and likewise validated avoidance goals as an antecedent of avoidance coping and avoidance coping as a partial mediator of the relation between avoidance goals and longitudinal change in SWB. It also showed that avoidance coping partially mediates the link between avoidance goals and life stressors, and validated a sequential mediational model involving both avoidance coping and life stressors. The aforementioned results held when controlling for social desirability, basic traits, and general motivational dispositions. The findings are discussed with regard to the integration of various strands of research on self-regulation.
Abstract:
Techniques are proposed for evaluating forecast probabilities of events. The tools are especially useful when, as in the case of the Survey of Professional Forecasters (SPF) expected probability distributions of inflation, recourse cannot be made to the method of construction in the evaluation of the forecasts. The tests of efficiency and conditional efficiency are applied to the forecast probabilities of events of interest derived from the SPF distributions, and supplement a whole-density evaluation of the SPF distributions based on the probability integral transform approach.
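The probability integral transform approach mentioned here can be sketched for Gaussian forecast densities (a minimal illustration, not the paper's procedure; the function name is hypothetical):

```python
import numpy as np
from math import erf, sqrt

def pit_values(means, sds, obs):
    """Probability integral transforms z_i = F_i(y_i) for Gaussian
    forecast densities with the given means and standard deviations.

    If the forecast densities are correct, the z's are i.i.d.
    Uniform(0,1); a histogram of z is the usual calibration check.
    """
    return np.array([0.5 * (1.0 + erf((y - m) / (s * sqrt(2.0))))
                     for m, s, y in zip(means, sds, obs)])
```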
Abstract:
We consider different methods for combining probability forecasts. In empirical exercises, the data generating process of the forecasts and the event being forecast is not known, and therefore the optimal form of combination will also be unknown. We consider the properties of various combination schemes for a number of plausible data generating processes, and indicate which types of combinations are likely to be useful. We also show that whether forecast encompassing is found to hold between two rival sets of forecasts or not may depend on the type of combination adopted. The relative performances of the different combination methods are illustrated, with an application to predicting recession probabilities using leading indicators.
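Two standard combination schemes of the kind compared in such exercises are the linear and logarithmic opinion pools; a minimal sketch (function names are hypothetical, not from the paper):

```python
import numpy as np

def linear_pool(forecasts, weights=None):
    """Linear opinion pool: weighted average of rival probability
    forecasts. forecasts has shape (n_models, n_periods)."""
    f = np.asarray(forecasts, float)
    if weights is None:
        weights = np.full(f.shape[0], 1.0 / f.shape[0])
    return weights @ f

def log_pool(forecasts, weights=None):
    """Logarithmic opinion pool: weighted geometric mean,
    renormalised over the binary event / non-event."""
    f = np.asarray(forecasts, float)
    if weights is None:
        weights = np.full(f.shape[0], 1.0 / f.shape[0])
    num = np.prod(f ** weights[:, None], axis=0)
    den = num + np.prod((1.0 - f) ** weights[:, None], axis=0)
    return num / den
```

The two pools generally disagree away from consensus, which is one reason conclusions about forecast encompassing can depend on the type of combination adopted.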
Abstract:
We consider whether survey respondents’ probability distributions, reported as histograms, provide reliable and coherent point predictions, when viewed through the lens of a Bayesian learning model. We argue that a role remains for eliciting directly-reported point predictions in surveys of professional forecasters.
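A minimal illustration of how a point prediction is commonly read off a reported histogram, under the simplifying assumption that probability mass sits at each bin midpoint (this is not the authors' Bayesian learning model; the function name is hypothetical):

```python
import numpy as np

def histogram_mean(bin_edges, probs):
    """Point prediction implied by a reported histogram: its mean,
    placing each bin's probability mass at the bin midpoint."""
    edges = np.asarray(bin_edges, float)
    mids = 0.5 * (edges[:-1] + edges[1:])   # bin midpoints
    return float(np.dot(mids, np.asarray(probs, float)))
```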
Abstract:
Older adults often experience memory impairments, but can sometimes use selective processing and schematic support to remember important information. The current experiments investigate to what degree younger and healthy older adults remember medication side effects that were subjectively or objectively important to remember. Participants studied a list of common side effects, rated how negative each effect would be if they were to experience it, and were then given a free recall test. In Experiment 1, the severity of the side effects ranged from mild (e.g., itching) to severe (e.g., stroke), and in Experiment 2, certain side effects were indicated as critical to remember (i.e., “contact your doctor if you experience this”). There were no age differences in free recall of the side effects, and older adults remembered more severe side effects relative to mild effects. However, older adults were less likely than younger adults to recognize critical side effects on a later recognition test. The findings suggest that older adults can selectively remember medication side effects, but have difficulty identifying familiar but potentially critical side effects, which has implications for monitoring medication use in older age.