50 results for Not-working time
Exclusive Nightclubs and Lonely Hearts Columns: Nonmonotone Participation in Optional Intermediation
Abstract:
In many decentralised markets, the traders who benefit most from an exchange do not employ intermediaries even though they could easily afford them. At the same time, employing intermediaries is not worthwhile for traders who benefit little from trade. Together, these decisions amount to non-monotone participation choices in intermediation: only traders of middle “type” employ intermediaries, while the rest, the high and the low types, prefer to search for a trading partner directly. We provide a theoretical foundation for this hitherto unexplained phenomenon. We build a dynamic matching model in which a trader’s equilibrium bargaining share is a convex increasing function of her type. We also show that such convexity is a necessary condition for the existence of non-monotone equilibria.
Abstract:
In this paper we show that the inclusion of unemployment-tenure interaction variates in Mincer wage equations is subject to serious pitfalls. These variates were designed to test whether or not the sensitivity of a worker’s wage to the business cycle varies with her tenure. We show that three canonical variates used in the literature - the minimum unemployment rate during a worker’s time at the firm (min u), the unemployment rate at the start of her tenure (Su), and the current unemployment rate interacted with a new-hire dummy (δu) - can all be significant and "correctly" signed even when each worker in the firm receives the same wage, regardless of tenure (equal treatment). In matched data the problem can be resolved by the inclusion in the panel of firm-year interaction dummies. In unmatched data, where this is not possible, we propose a solution for min u and Su based on Solon, Barsky and Parker's (1994) two-step method. This method is sub-optimal because it ignores a large amount of cross-tenure variation in average wages and is only valid when the scaled covariances of firm wages and firm employment are acyclical. Unfortunately, δu cannot be identified in unmatched data because a differential wage response of new hires and incumbents to unemployment will appear under both equal treatment and unequal treatment.
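For concreteness, a schematic version of such a wage equation reads as follows (the notation is ours; the paper's exact specification may include further controls):

\[
\ln w_{it} \;=\; x_{it}'\beta \;+\; \gamma_1\, \mathrm{min}\,u_{it} \;+\; \gamma_2\, Su_{i} \;+\; \gamma_3\, \delta_{it}\, u_t \;+\; \varepsilon_{it},
\]

where \(\mathrm{min}\,u_{it}\) is the lowest unemployment rate observed during worker \(i\)'s tenure up to date \(t\), \(Su_i\) the unemployment rate when she was hired, and \(\delta_{it} u_t\) the current unemployment rate interacted with a new-hire dummy.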
Abstract:
We forecast quarterly US inflation based on the generalized Phillips curve using econometric methods which incorporate dynamic model averaging. These methods not only allow for coefficients to change over time, but also allow for the entire forecasting model to change over time. We find that dynamic model averaging leads to substantial forecasting improvements over simple benchmark regressions and more sophisticated approaches such as those using time-varying coefficient models. We also provide evidence on which sets of predictors are relevant for forecasting in each period.
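A minimal sketch of the model-averaging recursion behind such methods (assuming, for brevity, that each candidate model's one-step predictive likelihoods have already been computed; the function and parameter names are ours):

```python
import numpy as np

def dma_weights(pred_lik, alpha=0.99):
    """Recursively update model probabilities by the forgetting rule
    pi_{t|t-1,k} ~ pi_{t-1|t-1,k}**alpha, then Bayes-update with each
    model's one-step predictive likelihood (pred_lik: T x K array)."""
    T, K = pred_lik.shape
    w = np.full(K, 1.0 / K)          # flat prior over the K models
    weights = np.empty((T, K))
    for t in range(T):
        w = w ** alpha               # forgetting: flatten old evidence
        w /= w.sum()
        w = w * pred_lik[t]          # Bayes update with new data
        w /= w.sum()
        weights[t] = w
    return weights
```

The forgetting exponent alpha < 1 keeps the model probabilities responsive, which is what allows the best-performing forecasting model to change over time.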
Abstract:
We propose a non-equidistant Q rate matrix formula and an adaptive numerical algorithm for a continuous-time Markov chain to approximate jump-diffusions with affine or non-affine functional specifications. Our approach also accommodates state-dependent jump intensity and jump distribution, a flexibility that is very hard to achieve with other numerical methods. The Kolmogorov-Smirnov test shows that the proposed Markov chain transition density converges to the one given by the likelihood expansion formula as in Ait-Sahalia (2008). We provide numerical examples for European stock option pricing in Black and Scholes (1973), Merton (1976) and Kou (2002).
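The flavour of the construction can be sketched for the pure-diffusion part (a generic upwinded finite-difference generator on a non-uniform grid, not the paper's exact rate-matrix formula; jump terms are omitted):

```python
import numpy as np

def generator(x, mu, sigma2):
    """Build a CTMC generator Q on a non-equidistant grid x that
    approximates a diffusion with drift mu(x) and variance sigma2(x).
    Upwinding the drift keeps all off-diagonal rates nonnegative."""
    n = len(x)
    Q = np.zeros((n, n))
    for i in range(1, n - 1):
        hm, hp = x[i] - x[i - 1], x[i + 1] - x[i]
        up, dn = max(mu(x[i]), 0.0), max(-mu(x[i]), 0.0)
        Q[i, i + 1] = sigma2(x[i]) / (hp * (hm + hp)) + up / hp
        Q[i, i - 1] = sigma2(x[i]) / (hm * (hm + hp)) + dn / hm
        Q[i, i] = -(Q[i, i + 1] + Q[i, i - 1])
    return Q  # boundary rows left at zero, i.e. absorbing, for simplicity
```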
Abstract:
I prove that as long as we allow the marginal utility for money (lambda) to vary between purchases (similarly to the budget), the quasi-linear and the ordinal budget-constrained models rationalize the same data. However, we know that lambda is approximately constant. I provide a simple constructive proof of the necessary and sufficient condition for constant-lambda rationalization, which I argue should replace the Generalized Axiom of Revealed Preference in empirical studies of consumer behavior. 'Go Cardinals!' It is the minimal requirement of any scientific theory that it be consistent with the data it is trying to explain. In the case of (Hicksian) consumer theory it was revealed preference, introduced by Samuelson (1938, 1948), that provided an empirical test to satisfy this need. At that time most economic reasoning was done in terms of a competitive general equilibrium, a concept abstract enough that it can be built on ordinal preferences over baskets of goods, even if the extremely specialized ones of Arrow and Debreu. However, starting in the sixties, economics has moved beyond the 'invisible hand' explanation of how markets, even competitive ones, operate. A seemingly unavoidable step of this 'revolution' was that ever since, most economic research has been carried out in a partial equilibrium context. Now, the partial equilibrium approach does not mean that the rest of the markets are ignored, but rather that they are held constant. In other words, there is a special commodity, call it money, that reflects the trade-offs of moving purchasing power across markets. As a result, the basic building block of consumer behavior in partial equilibrium is no longer the consumer's preferences over goods, but rather her valuation of them in terms of money. This new paradigm necessitates a new theory of revealed preference.
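For reference, the GARP test that the constant-lambda condition is argued to replace can be implemented as a transitive-closure check (a standard textbook procedure; the sketch below, including all names, is ours):

```python
import numpy as np

def satisfies_garp(P, Q):
    """GARP: if bundle t is revealed preferred (transitively) to s,
    then s must not be strictly directly revealed preferred to t.
    P, Q: T x n arrays of observed prices and chosen bundles."""
    cost = P @ Q.T                     # cost[t, s] = p_t . q_s
    own = np.diag(cost)
    D = cost <= own[:, None]           # direct: p_t.q_t >= p_t.q_s
    S = cost < own[:, None]            # strict direct relation
    R = D.copy()
    T = len(P)
    for k in range(T):                 # Floyd-Warshall transitive closure
        R = R | (R[:, [k]] & R[[k], :])
    return not np.any(R & S.T)         # violation: t R s and s P0 t
```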
Abstract:
Although it might have been expected that, by this point in time, the unacceptability of the marginal productivity theory of the return on capital would be universally agreed, that is evidently not the case. Popular textbooks still propound the dogma to the innocent. This note is presented in the hope that a succinct indication of the origins of the theory will contribute to a more general appreciation of the unrealistic and illogical nature of this doctrine.
Abstract:
In this paper we develop methods for estimation and forecasting in large time-varying parameter vector autoregressive models (TVP-VARs). To overcome computational constraints with likelihood-based estimation of large systems, we rely on Kalman filter estimation with forgetting factors. We also draw on ideas from the dynamic model averaging literature and extend the TVP-VAR so that its dimension can change over time. A final extension lies in the development of a new method for estimating, in a time-varying manner, the parameter(s) of the shrinkage priors commonly used with large VARs. These extensions are operationalized through the use of forgetting factor methods and are thus computationally simple. An empirical application involving forecasting inflation, real output, and interest rates demonstrates the feasibility and usefulness of our approach.
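A single-equation sketch of the forgetting-factor Kalman filter underlying the approach (the paper works with large VAR systems; the fixed measurement variance v and all names here are our simplifications):

```python
import numpy as np

def tvp_kalman(y, X, lam=0.99, v=1.0):
    """Kalman-filter estimation of a time-varying parameter regression
    y_t = x_t' b_t + e_t, where the state covariance is inflated by a
    forgetting factor lam in place of an estimated state-noise matrix."""
    T, k = X.shape
    b = np.zeros(k)
    P = np.eye(k) * 10.0                 # diffuse prior on coefficients
    betas = np.empty((T, k))
    for t in range(T):
        P = P / lam                      # forgetting: P_{t|t-1} = P_{t-1|t-1}/lam
        x = X[t]
        f = x @ P @ x + v                # one-step predictive variance
        K = P @ x / f                    # Kalman gain
        b = b + K * (y[t] - x @ b)       # measurement update of the state
        P = P - np.outer(K, x @ P)
        betas[t] = b
    return betas
```

Because no state-noise covariance has to be simulated or estimated, the recursion stays computationally cheap even when stacked across many equations.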
Abstract:
The quintessence of recent natural science studies is that the 2 degrees C target can only be achieved with massive emission reductions in the next few years. The central twist of this paper is the addition of this limited time to act into a non-perpetual real options framework analysing optimal climate policy under uncertainty. The window-of-opportunity modelling setup shows that the limited time to act may spark a trend reversal in the direction of low-carbon alternatives. However, high uncertainty about possible climate pathways still deters the implementation of a climate policy.
Abstract:
The eight years from 2000 to 2008 saw rapid growth in the use of securitization by UK banks. We aim to identify the reasons that contributed to this rapid growth. The time period (2000 to 2010) covered by our study is noteworthy as it covers the pre-financial-crisis credit boom, the peak of the financial crisis and its aftermath. In the wake of the financial crisis, many governments, regulators and political commentators have pointed an accusing finger at the securitization market, even in the absence of a detailed statistical and economic analysis. We contribute to the extant literature by performing such an analysis on UK banks, focussing principally on whether it is the need for liquidity (i.e. the funding of their balance sheets), the desire to engage in regulatory capital arbitrage, or the need for credit risk transfer that has led UK banks to securitize their assets. We show that securitization has been significantly driven by liquidity reasons. In addition, we observe a positive link between securitization and banks' credit risk. We interpret these latter findings as evidence that UK banks which engaged in securitization did so, in part, to transfer credit risk, and that, in comparison to UK banks which did not use securitization, they had more credit risk to transfer in the sense that they originated lower quality loans and held lower quality assets. We show that banks which issued more asset-backed securities before the financial crisis suffered more defaults after the financial crisis.
Abstract:
Game theorists typically assume that changing a game’s payoff levels (adding the same constant to, or subtracting it from, all payoffs) should not affect behavior. While this invariance is an implication of the theory when payoffs mirror expected utilities, it is an empirical question when the “payoffs” are actually money amounts. In particular, if individuals treat monetary gains and losses differently, then payoff-level changes may matter when they result in positive payoffs becoming negative, or vice versa. We report the results of a human-subjects experiment designed to test for two types of loss avoidance: certain-loss avoidance (avoiding a strategy leading to a sure loss, in favor of an alternative that might lead to a gain) and possible-loss avoidance (avoiding a strategy leading to a possible loss, in favor of an alternative that leads to a sure gain). Subjects in the experiment play three versions of Stag Hunt, which are identical up to the level of payoffs, under a variety of treatments. We find differences in behavior across the three versions of Stag Hunt; these differences are hard to detect in the first round of play, but grow over time. When significant, the differences we find are in the direction predicted by certain- and possible-loss avoidance. Our results carry implications for games with multiple equilibria, and for theories that attempt to select among equilibria in such games.
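To see the payoff-level manipulation concretely (illustrative numbers, not the experiment's actual parameters), subtract 2 from every payoff of the Stag Hunt on the left to obtain the game on the right:

\[
\begin{array}{c|cc}
 & S & H \\\hline
S & 4,4 & 0,3 \\
H & 3,0 & 3,3
\end{array}
\qquad\longrightarrow\qquad
\begin{array}{c|cc}
 & S & H \\\hline
S & 2,2 & -2,1 \\
H & 1,-2 & 1,1
\end{array}
\]

Expected-utility invariance says behavior should be identical in the two games, but in the right-hand game choosing S risks a loss of 2, so possible-loss avoidance predicts a shift toward H.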
Abstract:
This paper attempts to estimate the impact of population ageing on house prices. There is considerable debate about whether population ageing puts downward or upward pressure on house prices. The empirical approach differs from earlier studies of this relationship, which are mainly regression analyses of macro time-series data. A micro-simulation methodology is adopted that combines a macro-level house price model with a micro-level household formation model. The case study is Scotland, a country that is expected to age rapidly in the future. The parameters of the household formation model are estimated with panel data from the British Household Panel Survey covering the period 1999-2008. The estimates are then used to carry out a set of simulations, based on population projections that span a considerable range in the rate of population ageing. The main finding from the simulations is that population ageing, or more generally changes in age structure, is unlikely to be a main determinant of house prices, at least in Scotland.
Abstract:
The monetary policy reaction function of the Bank of England is estimated by the standard GMM approach and the ex-ante forecast method developed by Goodhart (2005), with particular attention to the horizons for inflation and output at which each approach gives the best fit. The horizons for the ex-ante approach are much closer to what is implied by the Bank’s view of the transmission mechanism, while the GMM approach produces an implausibly slow adjustment of the interest rate, and suffers from a weak instruments problem. These findings suggest a strong preference for the ex-ante approach.
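The object being estimated is a forward-looking reaction function of the generic form (notation ours; the exact specification follows the papers cited):

\[
i_t \;=\; \rho\, i_{t-1} \;+\; (1-\rho)\left(\alpha + \beta\, E_t \pi_{t+h} + \gamma\, E_t y_{t+k}\right) \;+\; \varepsilon_t,
\]

where \(h\) and \(k\) are the inflation and output horizons at which fit is compared. GMM replaces the expectations with realized outcomes and instruments dated \(t\) or earlier, while the ex-ante method substitutes the Bank's own published forecasts.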
Abstract:
VAR methods have been used to model the inter-relationships between unemployment inflows and outflows and vacancies using tools such as impulse response analysis. In order to investigate whether such impulse responses change over the course of the business cycle or over time, this paper uses TVP-VARs for US and Canadian data. For the US, we find interesting differences between the most recent recession and earlier recessions and expansions. In particular, we find the immediate effect of a negative shock on both inflow and outflow hazards to be larger in 2008 than in earlier times. Furthermore, the effect of this shock takes longer to decay. For Canada, we find less evidence of time-variation in impulse responses.
Abstract:
The measurement of interconnectedness in an economy using input-output tables is not new; however, much of the previous literature has had no explicit dynamic dimension. Studies have tried to estimate the degree of inter-relatedness for an economy at a given point in time using one input-output table, and some have compared different economies at a point in time, but few have looked at how interconnectedness within an economy changes over time. The publication in 2010 of a consistent series of input-output tables for Scotland offers the researcher the opportunity to track changes in the degree of interconnectedness over the period 1998 to 2007. The paper is in two parts. A simple measure of interconnectedness is introduced in the first part of the paper and applied to the Scottish tables. In the second part an extraction method is applied sector by sector to the tables in order to estimate how interconnectedness has changed over time for each industrial sector.
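A minimal sketch of the two ingredients, a Leontief-multiplier linkage index and a hypothetical-extraction comparison (a generic formulation with our own names; the paper's exact measure may differ):

```python
import numpy as np

def interconnectedness(Z, x):
    """Mean output multiplier from the Leontief inverse (I - A)^{-1},
    where Z is the n x n inter-industry flow matrix and x the vector
    of gross outputs, so a_ij = z_ij / x_j."""
    A = Z / x[None, :]
    L = np.linalg.inv(np.eye(len(x)) - A)
    return L.sum() / len(x)

def extraction_effect(Z, x, j):
    """Hypothetical extraction: remove sector j and compare the
    linkage index with and without it."""
    keep = [i for i in range(len(x)) if i != j]
    reduced = interconnectedness(Z[np.ix_(keep, keep)], x[keep])
    return interconnectedness(Z, x) - reduced
```

Applying the second function to each sector of each annual table gives one way of tracking how a sector's contribution to overall interconnectedness evolves over the period studied.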