101 results for Andrews
in Scottish Institute for Research in Economics (SIRE), United Kingdom
Abstract:
Expectations about the future are central to the determination of current macroeconomic outcomes and to the formulation of monetary policy. Recent literature has explored ways of supplementing the benchmark of rational expectations with explicit models of expectations formation that rely on econometric learning. Some apparently natural policy rules turn out to imply expectational instability of private agents’ learning. We use the standard New Keynesian model to illustrate this problem and survey the key results on interest-rate rules that deliver both uniqueness and stability of equilibrium under econometric learning. We then consider some practical concerns, such as measurement errors in private expectations, the observability of variables, and the learning of structural parameters required for policy. We also discuss some recent applications, including policy design under perpetual learning, estimated models with learning, recurrent hyperinflations, and macroeconomic policy to combat liquidity traps and deflation.
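As an illustrative aside (standard Evans-Honkapohja notation, not taken from the paper itself): under least-squares learning, agents update the coefficients of a perceived law of motion recursively,

\[
\phi_t = \phi_{t-1} + t^{-1} R_t^{-1} x_{t-1}\left(y_t - \phi_{t-1}' x_{t-1}\right), \qquad
R_t = R_{t-1} + t^{-1}\left(x_{t-1} x_{t-1}' - R_{t-1}\right),
\]

and a rational expectations equilibrium \(\bar{\phi}\) is expectationally stable (E-stable) when the differential equation \(\dot{\phi} = T(\phi) - \phi\), where \(T\) maps the perceived to the actual law of motion, is locally stable at \(\bar{\phi}\). Policy rules that fail this condition produce the expectational instability referred to above.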
Abstract:
This paper provides a simple theoretical framework for discussing the relationship between assisted reproductive technologies and the microeconomics of fertility choice. Individuals make choices about education and work, along with decisions about whether and when to have children. Decisions regarding fertility are influenced by policy and labor market factors that affect the earnings opportunities of mothers and the costs of raising children. We show how differences in these economic factors across countries explain the observed cross-country differences in fertility and childbearing-age patterns. We then use the model to predict behavioral responses to biomedical improvements in assisted reproductive technologies, and hence the impact of these technologies on fertility.
Abstract:
The paper uses a range of primary-source empirical evidence to address the question: ‘why is it so hard to value intangible assets?’ The setting is venture capital investment in high-technology companies. While the investors are risk specialists and financial experts, the entrepreneurs are more knowledgeable about product innovation. The context therefore lends itself to analysis within a principal-agent framework, in which information asymmetry may give rise to adverse selection, pre-contract, and moral hazard, post-contract. We examine how the investor might attenuate such problems and attach a value to such high-tech investments in what are often merely intangible assets, through expert due diligence, monitoring and control. Qualitative evidence is used to qualify the rather clear-cut picture provided by a principal-agent approach into a more mixed picture in which the ‘art and science’ of investment appraisal are utilised by both parties alike.
Abstract:
What is the seigniorage-maximizing level of inflation? Formulae for the seigniorage-maximizing inflation rate (SMIR) from four models are compared. Two sticky-price models arrive at very different quantitative recommendations, although both predict somewhat lower SMIRs than Cagan’s formula and a variant of a flex-price model due to Kimbrough (2006). The models differ markedly in how inflation distorts the labour market: the Calvo model implies that inflation and output are negatively related and that output is falling in price stickiness, whilst the Rotemberg cost-of-price-adjustment model implies exactly the opposite. Interestingly, if our version of the Calvo model is to be believed, the level of inflation experienced recently in advanced economies such as the USA and the UK may be quite close to the SMIR.
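For reference, Cagan’s formula mentioned above follows from maximizing steady-state seigniorage under semi-log money demand; this is the textbook benchmark, not the paper’s sticky-price variants:

\[
S(\pi) = \pi\, m(\pi), \qquad m(\pi) = e^{-\alpha\pi} \;\Longrightarrow\; \frac{dS}{d\pi} = e^{-\alpha\pi}\left(1 - \alpha\pi\right) = 0 \;\Longrightarrow\; \pi^{SMIR} = \frac{1}{\alpha},
\]

the reciprocal of the semi-elasticity of money demand with respect to inflation.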
Abstract:
This paper reports on (a) new primary-source evidence on, and (b) statistical and econometric analysis of, high-technology clusters in Scotland. It focuses on the following sectors: software, life sciences, microelectronics, optoelectronics, and digital media. Evidence from a postal and e-mailed questionnaire is presented and discussed under the headings of performance, resources, collaboration & cooperation, embeddedness, and innovation. The sampled firms are characterised as small (viz. micro-firms and SMEs), knowledge-intensive (largely graduate staff), research-intensive (mean R&D spend of GBP 842k), and internationalised (mainly selling to markets beyond Europe). Preliminary statistical evidence is presented on Gibrat’s Law (independence of growth and size) and the Schumpeterian Hypothesis (scale economies in R&D). Estimates suggest a short-run equilibrium size of just 100 employees, but a long-run equilibrium size of 1,000 employees. Further, to achieve the Schumpeterian effect (of marked scale economies in R&D), estimates suggest that firms have to grow to much larger sizes, beyond 3,000 employees. We argue that the principal way of achieving the latter scale may need to be by takeovers and mergers, rather than by internally driven growth.
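As a hedged sketch of the kind of specification underlying such estimates (the paper’s exact equation may differ), Gibrat’s Law is commonly tested in a growth-size regression of the form

\[
\ln S_{i,t} = \alpha + \beta \ln S_{i,t-1} + \varepsilon_{i,t},
\]

where Gibrat’s Law (growth independent of size) corresponds to \(\beta = 1\); with \(\beta < 1\), mean reversion implies an equilibrium size \(S^{*} = \exp\{\alpha/(1-\beta)\}\), which is how short-run and long-run equilibrium sizes of the sort reported above can be recovered.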
Abstract:
We show that a flex-price two-sector open-economy DSGE model can explain the poor degree of international risk sharing and the exchange-rate disconnect. We use a suite of model evaluation measures and examine the role of (i) traded and non-traded sectors; (ii) financial market incompleteness; (iii) preference shocks; (iv) deviations from the UIP condition for exchange rates; and (v) creditor status in net foreign assets. We find that there is a good case for both traded and non-traded productivity shocks, as well as UIP deviations, in explaining the puzzles.
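For context, the UIP condition referred to in (iv) is, in logs (a textbook statement, not the paper’s exact formulation),

\[
E_t s_{t+1} - s_t = i_t - i_t^{*},
\]

and a UIP deviation enters as an additional wedge \(\psi_t\) on the right-hand side, breaking the tight link between interest differentials and expected depreciation that underlies the disconnect puzzle.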
Abstract:
Employing the financial accelerator (FA) model of Bernanke, Gertler and Gilchrist (1999), enhanced to include a shock to the FA mechanism, we construct and study shocks to the efficiency of the financial sector in post-war US business cycles. We find that financial shocks are very tightly linked with the onset of recessions, more so than TFP or monetary shocks. The financial shock invariably remains contractionary for some time after recessions have ended. The shock accounts for a large part of the variance of GDP and is strongly negatively correlated with the external finance premium. Second-moment comparisons across variants of the model, with and without a (stochastic) FA mechanism, suggest that the stochastic FA model helps us understand the data.
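A hedged sketch of where such a shock can enter (the BGG structure with an assumed multiplicative disturbance; the paper’s exact specification may differ): the external finance premium depends inversely on entrepreneurial net worth relative to the value of capital,

\[
E_t r^{k}_{t+1} - r_t = \varepsilon_t\, s\!\left(\frac{N_t}{Q_t K_{t+1}}\right), \qquad s' < 0,
\]

so a positive realisation of the efficiency shock \(\varepsilon_t\) raises the cost of external finance for given balance-sheet conditions.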
Abstract:
This paper does two things. First, it presents alternative approaches to the standard methods of estimating productive efficiency using a production function. It favours a parametric approach (viz. the stochastic production frontier approach) over a nonparametric approach (e.g. data envelopment analysis); and, further, one that provides a statistical explanation of efficiency, as well as an estimate of its magnitude. Second, it illustrates the favoured approach (i.e. the ‘single-stage procedure’) with estimates of two models of explained inefficiency, using data from the Thai manufacturing sector after the crisis of 1997. Technical efficiency is modelled as being dependent on capital investment in three major areas (viz. land, machinery and office appliances), where land is intended to proxy the effects of unproductive, speculative capital investment, and both machinery and office appliances are intended to proxy the effects of productive, non-speculative capital investment. The estimates from these models cast new light on the five-year post-1997 crisis period in Thailand, suggesting a structural shift from relatively labour-intensive to relatively capital-intensive production in manufactures from 1998 to 2002.
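The single-stage procedure referred to typically rests on a composed-error frontier (a standard statement of the model, with the paper’s investment proxies entering the inefficiency component):

\[
\ln y_i = f(x_i;\beta) + v_i - u_i, \qquad v_i \sim N(0,\sigma_v^2),\; u_i \geq 0,\; E(u_i) = g(z_i'\delta),
\]

where \(v_i\) is statistical noise, \(u_i\) is inefficiency, and the covariates \(z_i\) (here, the land, machinery and office-appliance investment proxies) explain inefficiency jointly with estimation of the frontier, rather than in a second-step regression.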
Abstract:
We study how the use of judgement or “add-factors” in forecasting may disturb the set of equilibrium outcomes when agents learn using recursive methods. We isolate conditions under which new phenomena, which we call exuberance equilibria, can exist in a standard self-referential environment. Local indeterminacy is not a requirement for existence. We construct a simple asset pricing example and find that exuberance equilibria, when they exist, can be extremely volatile relative to fundamental equilibria.
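A hedged sketch of the self-referential structure alluded to (assumed notation, not the authors’): the price depends on forecasts, and forecasts carry a judgemental add-factor,

\[
p_t = \delta\, \hat{E}_t p_{t+1} + d_t, \qquad \hat{E}_t p_{t+1} = E^{LS}_t p_{t+1} + a_t,
\]

where \(E^{LS}_t\) denotes the recursive least-squares forecast and \(a_t\) the add-factor; because the forecast feeds back into \(p_t\), persistent judgement can become partially self-fulfilling, which is the mechanism behind exuberance equilibria.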
Abstract:
Should two-band income taxes be progressive given a general income distribution? We provide a negative answer under utilitarian and max-min welfare functions. While this result clarifies some ambiguities in the literature, it does not rule out progressive taxes in general. If we maximize the total or weighted utility of the poor, as society often intends, progressive taxes can be justified, especially when the ‘rich’ are very rich. Under these objectives we obtain new necessary conditions for progressive taxes, which depend only on aggregate features of income distributions. The validity of these conditions is examined using plausible income distributions.
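For concreteness (assumed notation, not the authors’), a two-band schedule with threshold \(z\) and marginal rates \(t_1, t_2\) levies

\[
T(y) = \begin{cases} t_1 y, & y \leq z, \\ t_1 z + t_2 (y - z), & y > z, \end{cases}
\]

and is progressive when \(t_2 > t_1\); the question above is whether welfare maximization selects \(t_2 > t_1\) or the reverse.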
Abstract:
We develop tests of the proportional hazards assumption, with respect to a continuous covariate, in the presence of unobserved heterogeneity with unknown distribution at the individual observation level. The proposed tests are especially powerful against ordered alternatives useful for modeling non-proportional hazards situations. In contrast to the case when the heterogeneity distribution is known up to finite-dimensional parameters, the null hypothesis for the current problem is similar to a test for absence of covariate dependence. However, the two testing problems differ in the nature of the relevant alternative hypotheses. We develop tests for both problems against ordered alternatives. Small-sample performance and an application to real data highlight the usefulness of the framework and methodology.
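The proportional hazards assumption at issue is, in standard mixed proportional hazards notation (assumed here), that for a continuous covariate \(x\) and frailty \(u\) with unknown distribution,

\[
\lambda(t \mid x, u) = u\, \lambda_0(t)\, e^{\beta x},
\]

so the covariate scales the baseline hazard \(\lambda_0(t)\) by a factor constant over time; the tests ask whether this proportionality holds while leaving the distribution of \(u\) unspecified.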
Abstract:
This paper develops a general theoretical framework within which a heterogeneous group of taxpayers confronts a market that supplies a variety of schemes for reducing tax liability, and uses this framework to explore the impact of a wide range of anti-avoidance policies. Schemes differ in their legal effectiveness and hence in the risks to which they expose taxpayers - risks which go beyond the risk of audit considered in the conventional literature on evasion. Given the individual taxpayer’s circumstances, the prices charged for the schemes and the policy environment, the model predicts (i) whether or not any given taxpayer will acquire a scheme, and (ii) if they do so, which type of scheme they will acquire. The paper then analyses how these decisions, and hence the tax gap, are influenced by four generic types of policy: Disclosure - earlier information leading to faster closure of loopholes; Penalties - the introduction of penalties for failed avoidance; Policy Design - fundamental policy changes that design out opportunities for avoidance; Product Register - the introduction of GAARs or mini-GAARs that give greater clarity about how different types of scheme will be treated. The paper shows that when considering the indirect/behavioural effects of policies on the tax gap it is important to recognise that these operate on two different margins. First, policies will have deterrence effects - their impact on the number of taxpayers choosing to acquire different types of scheme, as distinct from acquiring no scheme at all. There will be a range of such deterrence effects, reflecting the range of schemes available in the market. Second, since different schemes generate different tax gaps, policies will also have switching effects as they induce taxpayers who previously acquired one type of scheme to acquire another. The first three types of policy generate positive deterrence effects but differ in the switching effects they produce. The fourth type of policy produces mixed deterrence effects.
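A hedged sketch of the acquisition decision described above (assumed notation, not the paper’s): if scheme \(j\) offers tax saving \(\Delta_j\) at price \(p_j\), fails with probability \(q_j\), and carries penalty \(F_j\) on failure, a risk-neutral taxpayer solves

\[
j^{*} = \arg\max_j \left[(1-q_j)\Delta_j - q_j F_j - p_j\right],
\]

acquiring \(j^{*}\) only if the maximand is positive; deterrence effects move taxpayers to the no-scheme corner, while switching effects change which \(j^{*}\) is selected.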
Abstract:
This paper examines the rise in European unemployment since the 1970s by introducing endogenous growth into an otherwise standard New Keynesian model with capital accumulation and unemployment. We subject the model to an uncorrelated cost-push shock, in order to mimic a scenario akin to the one faced by central banks at the end of the 1970s. Monetary policy implements a disinflation by following an interest feedback rule calibrated to an estimate of a Bundesbank reaction function. Forty quarters after the shock has vanished, unemployment is still about 1.8 percentage points above its steady state. Our model also broadly reproduces cross-country differences in unemployment by drawing on cross-country differences in the size of the cost-push shock and the associated disinflation, the monetary policy reaction function, and the wage-setting structure.
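A generic interest feedback rule of the sort calibrated here takes the form (a standard specification; the Bundesbank estimates themselves are not reproduced)

\[
i_t = \rho\, i_{t-1} + (1-\rho)\left(\phi_\pi \pi_t + \phi_y y_t\right),
\]

with smoothing parameter \(\rho\) and responses \(\phi_\pi, \phi_y\) to inflation and the output gap; the disinflation experiment traces unemployment under such a rule after the cost-push shock.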
Abstract:
This paper makes three contributions. First, it shows how fieldwork within small firms in PR China has provided new evidence that enables us to measure and calibrate Entrepreneurial Orientation (EO), as ‘spirit’, and Intangible Assets (IA), as ‘material’, for use in models of small firm growth. Second, it uses inter-item correlation analysis and both exploratory and confirmatory factor analysis to provide new measures of EO and IA, in index and in vector form, for use in econometric models of firm growth. Third, it estimates two new econometric models of small firm employment growth in PR China, under the null hypothesis of Gibrat’s Law, using our two new index-based and vector-based measures of EO and IA. Estimation is by OLS, with adjustment for heteroscedasticity and for sample selectivity. Broadly, it finds that EO attributes have had little significant impact on small firm growth; indeed, innovativeness and pro-activity may paradoxically even dampen growth. However, IA attributes have had a positive and significant impact on growth, with networking and technological knowledge being of prime importance, and intellectual property and human capital being of lesser but still significant importance. In the light of these results, Gibrat’s Law is generalized, and Jovanovic’s learning theory is extended, to emphasise the importance of IA to growth. These findings cast new empirical light on the oft-quoted national slogan in PR China of “spirit and material”. So far as small firms are concerned, this paper suggests that their contribution to PR China’s remarkable economic growth is not so much attributable to the ‘spirit’ of enterprise (as suggested by propaganda) as, more prosaically, to the pursuit of the ‘material’.
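A hedged sketch of the Gibrat-type specification implied (assumed form, not the paper’s exact equation):

\[
\Delta \ln E_i = \beta_0 + \beta_1 \ln E_{i,0} + \boldsymbol{\gamma}'\mathbf{EO}_i + \boldsymbol{\delta}'\mathbf{IA}_i + u_i,
\]

where \(E_i\) is employment, Gibrat’s Law corresponds to \(\beta_1 = 0\), and the EO and IA measures enter either as indices or as vectors of attributes, with OLS estimation made robust to heteroscedasticity and corrected for sample selection.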
Abstract:
Until recently, much effort has been devoted to the estimation of panel data regression models without adequate attention being paid to the drivers of diffusion and interaction across cross-section and spatial units. We discuss some new methodologies in this emerging area and demonstrate their use in measurement and inference on cross-section and spatial interactions. Specifically, we highlight the important distinction between spatial dependence driven by unobserved common factors and spatial dependence based on a spatial weights matrix. We argue that purely factor-driven models of spatial dependence may be somewhat inadequate because of their connection with the exchangeability assumption. Limitations and potential enhancements of the existing methods are discussed, and several directions for new research are highlighted.
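The distinction highlighted can be made concrete in a combined specification (assumed notation):

\[
y_{it} = \rho \sum_{j} w_{ij}\, y_{jt} + \beta' x_{it} + \gamma_i' f_t + \varepsilon_{it},
\]

where the spatial-weights term captures local interactions through a known matrix \(W = (w_{ij})\), while \(\gamma_i' f_t\) captures strong cross-section dependence through unobserved common factors; purely factor-driven models omit the first channel, which is the inadequacy argued above.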