996 results for Measurement uncertainty
Abstract:
This paper develops and estimates a model of demand for environmental public goods in which consumers learn about their preferences through consumption experiences. We develop a theoretical model of Bayesian updating, perform comparative statics on the model, and show how it can be consistently incorporated into a reduced-form econometric model. We then estimate the model using data collected for two environmental goods. We find that the theoretical prediction that additional experience makes consumers more certain about their preferences, in both mean and variance, is supported in each case.
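To make the learning mechanism concrete, here is a minimal sketch of the normal-normal Bayesian updating such a model typically implies, with all priors, signal variances and data hypothetical rather than taken from the paper:

```python
import numpy as np

def update_preference(mu_prior, var_prior, signals, var_signal):
    """Normal-normal Bayesian update of a consumer's preference belief.

    Each consumption experience is treated as a noisy signal of the
    true preference parameter; every signal shrinks the posterior variance.
    """
    mu, var = mu_prior, var_prior
    for s in signals:
        precision = 1.0 / var + 1.0 / var_signal
        mu = (mu / var + s / var_signal) / precision
        var = 1.0 / precision
    return mu, var

rng = np.random.default_rng(0)
true_pref = 2.0
signals = true_pref + rng.normal(0.0, 1.0, size=20)   # 20 consumption experiences
mu, var = update_preference(0.0, 4.0, signals, 1.0)
print(mu, var)  # posterior mean near 2.0, variance well below the prior's 4.0
```

Each additional signal mechanically lowers the posterior variance, which is the comparative static the abstract reports support for.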
Abstract:
We develop methods for Bayesian model averaging (BMA) or selection (BMS) in Panel Vector Autoregressions (PVARs). Our approach allows us to select between or average over all possible combinations of restricted PVARs where the restrictions involve interdependencies between and heterogeneities across cross-sectional units. The resulting BMA framework can find a parsimonious PVAR specification, thus dealing with overparameterization concerns. We use these methods in an application involving the euro area sovereign debt crisis and show that our methods perform better than alternatives. Our findings contradict a simple view of the sovereign debt crisis which divides the euro zone into groups of core and peripheral countries and worries about financial contagion within the latter group.
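As a loose illustration of the averaging step, the sketch below forms approximate posterior model probabilities from information criteria. The paper works with marginal likelihoods of restricted PVARs; the BIC shortcut here is an assumption, not the authors' method:

```python
import numpy as np

def bma_weights(bics, prior=None):
    """Approximate posterior model probabilities from BIC values.

    Uses the standard approximation p(y | M_k) ~ exp(-BIC_k / 2); the
    marginal likelihoods of the restricted PVARs would replace it.
    """
    bics = np.asarray(bics, dtype=float)
    prior = np.ones_like(bics) if prior is None else np.asarray(prior, float)
    log_w = -0.5 * (bics - bics.min()) + np.log(prior)
    w = np.exp(log_w - log_w.max())
    return w / w.sum()

# three hypothetical restricted PVARs, e.g. full interdependence,
# block-restricted, and fully restricted (no cross-unit links)
print(bma_weights([1520.3, 1498.7, 1505.1]))
```

With weights in hand, BMS keeps the highest-weight specification while BMA averages forecasts across all of them, which is how the framework guards against overparameterization.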
Abstract:
This paper provides a general treatment of the welfare implications of legal uncertainty. We distinguish legal uncertainty from decision errors: though the former can be influenced by the latter, the latter are neither necessary nor sufficient for the existence of legal uncertainty. We show that an increase in decision errors will always reduce welfare. However, for any given level of decision errors, information structures involving more legal uncertainty can improve welfare. When sanctions on socially harmful actions are set at their optimal level, this holds even under complete legal uncertainty. This radically transforms one's perception of the "costs" of legal uncertainty. We also provide general proofs for two results previously established under restrictive assumptions. The first is that Effects-Based enforcement procedures may welfare-dominate Per Se (or object-based) procedures and will always do so when sanctions are optimally set. The second is that optimal sanctions may well be higher under enforcement procedures involving more legal uncertainty.
Abstract:
In this paper we make three contributions to the literature on optimal Competition Law enforcement procedures. The first (which is of general interest beyond competition policy) is to clarify the concept of "legal uncertainty", relating it to ideas in the Law and Economics literature, but formalising the concept through various information structures which specify the probability that each firm attaches – at the time it takes an action – to the possibility of its being deemed anti-competitive were it to be investigated by a Competition Authority. We show that the existence of Type I and Type II decision errors by competition authorities is neither necessary nor sufficient for the existence of legal uncertainty, and that information structures with legal uncertainty can generate higher welfare than information structures with legal certainty – a result echoing a similar finding obtained in a completely different context and under different assumptions in the earlier Law and Economics literature (Kaplow and Shavell, 1992). Our second contribution is to revisit and significantly generalise the analysis in our previous paper, Katsoulacos and Ulph (2009), involving a welfare comparison of Per Se and Effects-Based legal standards. In that analysis we considered just a single information structure under an Effects-Based standard, and penalties were exogenously fixed. Here we allow for (a) different information structures under an Effects-Based standard and (b) endogenous penalties. We obtain two main results: (i) considering all information structures, a Per Se standard is never better than an Effects-Based standard; (ii) optimal penalties may be higher when there is legal uncertainty than when there is no legal uncertainty.
Abstract:
This paper critically examines a number of issues relating to the measurement of tax complexity. It starts with an analysis of the concept of tax complexity, distinguishing tax design complexity from operational complexity. It considers the consequences and costs of complexity, and then examines the rationale for measuring complexity. Finally, it applies the analysis to an examination of an index of complexity developed by the UK Office of Tax Simplification (OTS).
Abstract:
We analyse the role of time variation in coefficients and other sources of uncertainty in exchange rate forecasting regressions. Our techniques incorporate the notion that the relevant set of predictors, and their corresponding weights, change over time. We find that predictive models which allow for sudden, rather than smooth, changes in coefficients significantly beat the random walk benchmark in an out-of-sample forecasting exercise. Using an innovative variance decomposition scheme, we identify uncertainty in the estimation of coefficients and uncertainty about the precise degree of coefficient variability as the main factors hindering the models' forecasting performance. The uncertainty regarding the choice of predictors is small.
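A minimal sketch of a time-varying-coefficient forecasting regression of this kind, using a scalar Kalman filter with a random-walk coefficient and a random-walk benchmark; the data are synthetic and the paper's actual specification is richer:

```python
import numpy as np

def tvp_forecast(y, x, q=0.1, r=1.0):
    """One-step-ahead forecasts from y_t = beta_t * x_t + e_t,
    with beta_t = beta_{t-1} + u_t, via a scalar Kalman filter.

    q (the coefficient-innovation variance) governs how abruptly the
    coefficient can move; a large q mimics sudden breaks.
    """
    beta, p = 0.0, 1.0
    preds = []
    for t in range(len(y)):
        preds.append(beta * x[t])            # forecast made before seeing y_t
        p = p + q                            # predict step
        k = p * x[t] / (x[t] ** 2 * p + r)   # Kalman gain
        beta = beta + k * (y[t] - beta * x[t])
        p = (1.0 - k * x[t]) * p
    return np.array(preds)

rng = np.random.default_rng(2)
T = 300
x = rng.normal(size=T)
beta_true = np.where(np.arange(T) < 150, 0.5, -0.5)   # one sudden break
y = beta_true * x + rng.normal(scale=0.5, size=T)
rw = np.r_[0.0, y[:-1]]                               # random-walk benchmark
for q, label in ((0.001, "smooth"), (0.5, "abrupt")):
    mse = np.mean((y[150:] - tvp_forecast(y, x, q=q)[150:]) ** 2)
    print(label, mse, np.mean((y[150:] - rw[150:]) ** 2))
```

After the break, the large-q filter reanchors the coefficient quickly, which is the sense in which sudden-change models can beat both smooth-change models and the random walk.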
Abstract:
This paper proposes a new class of stratification indices that measure interdistributional inequality between multiple groups. The class is based on a conceptualisation of stratification as a process that results in a hierarchical ordering of groups, and it therefore seeks to capture not only the extent to which groups form well-defined strata in the income distribution but also the scale of the resultant differences in income standards between them; these two factors play the same roles as identification and alienation, respectively, in the measurement of polarisation. The properties of the class as a whole are investigated, as well as those of selected members: zeroth- and first-power indices may be interpreted as measuring the overall incidence and depth of stratification respectively, while higher-power indices are directly sensitive to the severity of stratification between groups. An illustrative application provides an empirical analysis of global income stratification by region in 1993.
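The abstract does not state the functional form of the index class, so the sketch below is only a hypothetical analogue: a power family over pairwise group comparisons in which a separation term plays the role of identification and a normalised income-gap term plays the role of alienation. None of this is the paper's exact formula:

```python
import numpy as np

def stratification_index(groups, alpha=1):
    """Hypothetical power-family analogue of a stratification index.

    For each pair of groups, weight the share of cross-group comparisons
    in which the richer group's member is on top (separation) by the
    normalised gap in mean incomes raised to the power alpha.
    alpha = 0 -> incidence only; alpha = 1 -> depth; alpha > 1 -> severity.
    """
    total, weight = 0.0, 0.0
    for i, gi in enumerate(groups):
        for gj in groups[i + 1:]:
            hi, lo = (gi, gj) if np.mean(gi) >= np.mean(gj) else (gj, gi)
            sep = np.mean(hi[:, None] > lo[None, :])      # identification
            gap = (np.mean(hi) - np.mean(lo)) / (np.mean(hi) + np.mean(lo))
            w = len(gi) * len(gj)
            total += w * sep * gap ** alpha                # alienation term
            weight += w
    return total / weight

rng = np.random.default_rng(3)
regions = [rng.lognormal(m, 0.5, 200) for m in (0.0, 0.8, 1.6)]
for a in (0, 1, 2):
    print(a, stratification_index(regions, alpha=a))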
Abstract:
I put forward a concise and intuitive formula for calculating the valuation of a good when the consumer expects that further, related goods will soon become available. This valuation is tractable in the sense that it does not require explicitly solving the consumer's lifetime problem.
Abstract:
The possibility of low-probability extreme natural events has reignited the debate over the optimal intensity and timing of climate policy. In this paper, we contribute to the literature by assessing the implications of low-probability extreme events on environmental policy in a continuous-time real options model with “tail risk”. In a nutshell, our results indicate the importance of tail risk and call for foresighted pre-emptive climate policies.
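As an illustration of why tail risk matters, the following Monte Carlo sketch compares expected discounted damages with and without rare proportional jumps. It is a toy computation under assumed parameter values, not the paper's real-options model:

```python
import numpy as np

def simulated_damages(mu=0.02, sigma=0.1, lam=0.0, jump=1.0,
                      T=100, n=20_000, r=0.03, seed=4):
    """Expected discounted damages under geometric growth with
    Poisson 'tail risk' jumps.

    lam is the annual jump intensity and jump the proportional damage
    jump; lam = 0 recovers the no-tail-risk benchmark.
    """
    rng = np.random.default_rng(seed)
    d = np.ones(n)
    total = np.zeros(n)
    for t in range(1, T + 1):
        shock = rng.normal(mu - 0.5 * sigma ** 2, sigma, n)
        jumps = rng.poisson(lam, n)
        d *= np.exp(shock) * (1.0 + jump) ** jumps
        total += np.exp(-r * t) * d
    return total.mean()

print(simulated_damages(lam=0.0))    # smooth-damage benchmark
print(simulated_damages(lam=0.01))   # rare extreme events raise expected cost
```

Even a 1% annual chance of a doubling jump materially raises expected discounted damages, which is the intuition behind the call for pre-emptive policy.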
Abstract:
Using a large panel of unquoted UK firms over the period 2000-09, we examine the impact of firm-specific uncertainty on corporate failures. In this context we also distinguish between firms which are likely to be more or less dependent on bank finance as well as public and non-public companies. Our results document a significant effect of uncertainty on firm survival. This link is found to be more potent during the recent financial crisis compared with tranquil periods. We also uncover significant firm-level heterogeneity since the survival chances of bank-dependent and non-public firms are most affected by changes in uncertainty, especially during the recent global financial crisis.
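A hedged sketch of the kind of failure regression the abstract describes, estimated on synthetic data with a hypothetical uncertainty-by-bank-dependence-by-crisis interaction; the variable names and data-generating process are illustrative only:

```python
import numpy as np
import statsmodels.api as sm

# synthetic stand-in for the firm-year panel described in the abstract
rng = np.random.default_rng(5)
n = 5_000
uncertainty = rng.gamma(2.0, 0.5, n)     # firm-specific volatility measure
bank_dep = rng.integers(0, 2, n)         # 1 = bank-dependent firm
crisis = rng.integers(0, 2, n)           # 1 = crisis-period observation

# failure risk rises with uncertainty, more so for bank-dependent
# firms during the crisis (assumed coefficients)
logit = -3.0 + 0.8 * uncertainty + 0.6 * uncertainty * bank_dep * crisis
fail = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack(
    [uncertainty, bank_dep, crisis, uncertainty * bank_dep * crisis]))
print(sm.Logit(fail, X).fit(disp=0).summary())
```

The positive interaction coefficient is the synthetic counterpart of the heterogeneity the abstract reports: uncertainty bites hardest for bank-dependent firms in the crisis period.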
Abstract:
This paper extends the Nelson-Siegel linear factor model by developing a flexible macro-finance framework for modeling and forecasting the term structure of US interest rates. Our approach is robust to parameter uncertainty and structural change, as we consider instabilities in parameters and volatilities, and our model averaging method allows for investors' model uncertainty over time. Our time-varying parameter Nelson-Siegel Dynamic Model Averaging (NS-DMA) predicts yields better than standard benchmarks and successfully captures plausible time-varying term premia in real time. The proposed model has significant in-sample and out-of-sample predictability for excess bond returns, and the predictability is of economic value.
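For reference, the Nelson-Siegel curve underlying the model maps three factors into yields via level, slope and curvature loadings. The sketch below uses the fixed decay parameter popularised by Diebold and Li, whereas the paper lets parameters and volatilities vary over time:

```python
import numpy as np

def nelson_siegel(tau, beta1, beta2, beta3, lam=0.0609):
    """Nelson-Siegel yield curve: level, slope and curvature loadings.

    lam = 0.0609 is the Diebold-Li value for maturities in months; the
    paper's DMA layer would put time-varying weights on competing
    parameterisations of curves like this one.
    """
    tau = np.asarray(tau, dtype=float)
    slope = (1.0 - np.exp(-lam * tau)) / (lam * tau)
    curv = slope - np.exp(-lam * tau)
    return beta1 + beta2 * slope + beta3 * curv

maturities = np.array([3, 12, 36, 60, 120])   # months
print(nelson_siegel(maturities, beta1=4.5, beta2=-2.0, beta3=1.5))
```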
Abstract:
Bayesian model averaging (BMA) methods are regularly used to deal with model uncertainty in regression models. This paper shows how to introduce Bayesian model averaging methods into quantile regressions, allowing different predictors to affect different quantiles of the dependent variable. I show that quantile regression BMA methods can help reduce uncertainty about future inflation outcomes by providing superior predictive densities compared to mean regression models with and without BMA.
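A minimal sketch of quantile-regression model averaging: fit quantile regressions over all predictor subsets and weight them with an asymmetric-Laplace pseudo-likelihood criterion. That weighting choice is one plausible option, not necessarily the paper's scheme:

```python
import numpy as np
import statsmodels.api as sm
from itertools import combinations

def check_loss(u, q):
    """Pinball (check) loss for quantile q."""
    return np.mean(u * (q - (u < 0)))

def quantile_bma(y, X, q=0.9):
    """Average quantile-regression fits over all predictor subsets,
    weighted via an asymmetric-Laplace pseudo-likelihood with a
    BIC-style penalty (an assumed weighting rule)."""
    n, k = X.shape
    fits, crits = [], []
    for size in range(1, k + 1):
        for cols in combinations(range(k), size):
            Z = sm.add_constant(X[:, list(cols)])
            res = sm.QuantReg(y, Z).fit(q=q)
            loss = check_loss(y - res.predict(Z), q)
            crits.append(n * np.log(loss) + 0.5 * Z.shape[1] * np.log(n))
            fits.append(res.predict(Z))
    crits = np.array(crits)
    w = np.exp(-(crits - crits.min()))
    w /= w.sum()
    return np.average(np.column_stack(fits), axis=1, weights=w)

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 3))                    # e.g. inflation predictors
y = 1.0 + 0.5 * X[:, 0] + rng.normal(size=200)   # only the first one matters
print(quantile_bma(y, X, q=0.9)[:5])             # averaged 90th-percentile fit
```

Because the weights are computed separately at each quantile, different predictors can end up dominating different parts of the predictive density, which is the feature the abstract highlights.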