64 results for GDP elasticity
in CentAUR: Central Archive University of Reading - UK
Abstract:
The peak congestion of the European grid may create significant impacts on system costs because of the need for higher-marginal-cost generation, higher-cost system balancing and increasing grid reinforcement investment. The use of time-of-use rates, incentives, real-time pricing and other programmes, usually defined as Demand Side Management (DSM), could bring about significant reductions in prices, limit carbon emissions from dirty power plants, and improve the integration of renewable sources of energy. Unlike previous studies on the elasticity of residential electricity demand under flat tariffs, the aim of this study is not to investigate the known, relatively inelastic relationship between demand and prices. Rather, the aim is to assess how occupancy levels vary in different European countries. This reflects the reality of demand loads, which are predominantly determined by the timing of human activities (e.g. travelling to work, taking children to school) rather than by prices. To this end, two types of occupancy elasticity are estimated: baseline occupancy elasticity and peak occupancy elasticity. These represent the intrinsic elasticity associated with the human activities of single residential end-users in 15 European countries. This study makes use of occupancy time-series data from the Harmonised European Time Use Survey database to build European occupancy curves; identify peak occupancy periods; draw time use demand curves for video and TV watching activity; and estimate national occupancy elasticity levels of single-occupant households. Findings on occupancy elasticities provide an indication of possible DSM strategies based on occupancy levels rather than prices.
Abstract:
We consider the approximation of solutions of the time-harmonic linear elastic wave equation by linear combinations of plane waves. We prove algebraic orders of convergence both with respect to the dimension of the approximating space and to the diameter of the domain. The error is measured in Sobolev norms and the constants in the estimates explicitly depend on the problem wavenumber. The obtained estimates can be used in the h- and p-convergence analysis of wave-based finite element schemes.
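For orientation, a minimal sketch of the setting in standard notation (not necessarily the authors' exact formulation): the time-harmonic linear elastic wave equation and the pressure (P) and shear (S) plane waves used as approximating functions can be written as
\[
\nabla\cdot\boldsymbol{\sigma}(\mathbf{u}) + \omega^{2}\rho\,\mathbf{u} = \mathbf{0},
\qquad
\boldsymbol{\sigma}(\mathbf{u}) = 2\mu\,\boldsymbol{\varepsilon}(\mathbf{u}) + \lambda\,(\nabla\cdot\mathbf{u})\,\mathbf{I},
\]
\[
\mathbf{u}_{P}(\mathbf{x}) = \mathbf{d}\,e^{\mathrm{i}k_{P}\,\mathbf{d}\cdot\mathbf{x}},
\qquad
\mathbf{u}_{S}(\mathbf{x}) = \mathbf{d}^{\perp}\,e^{\mathrm{i}k_{S}\,\mathbf{d}\cdot\mathbf{x}},
\qquad
k_{P} = \omega\sqrt{\rho/(\lambda+2\mu)},
\quad
k_{S} = \omega\sqrt{\rho/\mu},
\]
where \(\mathbf{d}\) is a unit propagation direction, \(\lambda\) and \(\mu\) are the Lamé parameters and \(\rho\) the density; the approximating spaces are spanned by such plane waves over many directions \(\mathbf{d}\).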
Abstract:
The effects of data uncertainty on real-time decision-making can be reduced by predicting early revisions to US GDP growth. We show that survey forecasts efficiently anticipate the first-revised estimate of GDP, but that forecasting models incorporating monthly economic indicators and daily equity returns provide superior forecasts of the second-revised estimate. We consider the implications of these findings for analyses of the impact of surprises in GDP revision announcements on equity markets, and for analyses of the impact of anticipated future revisions on announcement-day returns.
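As a hedged illustration of the kind of efficiency test involved (a standard Mincer-Zarnowitz regression, not necessarily the authors' exact specification), a survey forecast \(f_t\) of the revised growth estimate \(y_t\) is efficient if, in
\[
y_{t} = \alpha + \beta f_{t} + \varepsilon_{t},
\]
the restrictions \(\alpha = 0\) and \(\beta = 1\) hold and no variable known at the forecast date (such as monthly indicators or daily equity returns) adds predictive power for \(\varepsilon_{t}\).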
Abstract:
Balkanisation is a way to describe the breakdown of cross-border banking, as nervous lenders retreat in particular from the more troubled parts of the Eurozone, or at least try to isolate operations within national boundaries. It is increasing at the bank level, but senior policy-makers consider it a negative trend: Mario Draghi, president of the European Central Bank, has talked of the need to “repair this financial fragmentation”, and Mark Carney, head of the global regulatory body the Financial Stability Board [and now Governor of the Bank of England], has warned that deglobalising finance will hurt growth and jobs by “reducing financial capacity and systemic resilience”. In this article I examine the impact of banking balkanisation on international trade and offer some initial thoughts on remedies for excessive risk in a non-balkanised banking world.
Abstract:
This paper describes the results of research intended to explore the volatility inherent in the United Nations Development Programme's (UNDP) Human Development Index (HDI). The HDI is intended to be a simple and transparent device for comparing progress in human development, and is an aggregate of life expectancy, education and GDP per capita. Values of the HDI for each country are presented in the Human Development Reports (HDRs), the first being published in 1990. However, while the methodology is consistent for all countries in each year, there are notable differences between years that make temporal comparisons of progress difficult. The paper presents the results of recalculating the HDI for a simplified sample of 114 countries using the various methodologies employed by the UNDP. The results are a set of deviations of recalculated HDI ranks from the original ranks given in the HDRs. The volatility that can result from such recalculation is shown to be substantial (±10-15 ranks), yet reports in the popular press are frequently sensitive to movements of only a few ranks. Such movement can easily be accounted for by changes in the HDI methodology rather than by genuine progress in human development. While the HDRs often carry warnings about the inadvisability of such year-on-year comparisons, it is argued that the existence of such a high-profile index, and its overt presentation within league tables, does encourage such comparison. Assuming that the HDI will be retained as a focal point within the HDRs, it is suggested that greater focus be placed upon more meaningful and robust categories of human development (e.g. low, medium and high) rather than on league tables, where shifts of a few places, perhaps resulting from nothing more than a methodological or data artefact, may be highlighted in the press and by policy-makers.
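For reference, the construction being recalculated is, in its pre-2010 form, a simple average of normalised component indices:
\[
I_{j} = \frac{x_{j} - x_{j}^{\min}}{x_{j}^{\max} - x_{j}^{\min}},
\qquad
\mathrm{HDI} = \tfrac{1}{3}\left(I_{\text{life expectancy}} + I_{\text{education}} + I_{\text{GDP}}\right),
\]
with GDP per capita entering in logarithms; changes between reports to the goalposts \(x^{\min}\) and \(x^{\max}\), or to the income transformation, are exactly the sort of methodological shifts that can move a country several ranks with no change in the underlying data.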
Abstract:
Indicators are commonly recommended as tools for assessing the attainment of development, and the current vogue is for aggregating a number of indicators together into a single index. It is claimed that such indices of development help achieve maximum impact in policy terms by appealing to those who may not necessarily have technical expertise in data collection, analysis and interpretation. In order to help counter criticisms of over-simplification, those advocating such indices also suggest that the raw data be provided so as to allow disaggregation into component parts and hence facilitate a more subtle interpretation if a reader so desires. This paper examines the problems involved in interpreting indices of development by focusing on the United Nations Development Programme's (UNDP) Human Development Index (HDI), published each year in the Human Development Reports (HDRs). The HDI was intended to provide an alternative to the more economically based indices, such as GDP, commonly used within neo-liberal development agendas. The paper explores the use of the HDI as a gauge of human development by making comparisons between two major political and economic communities in Africa (ECOWAS and SADC). While the HDI did help highlight important changes in human development over 10 years, it is concluded that the HDI and its components are difficult to interpret, as methodologies have changed significantly and the 'averaging' nature of the HDI can hide information unless care is taken. The paper discusses the applicability of alternative models to the HDI, such as the more neo-populist centred methods commonly advocated for indicators of sustainable development.
Abstract:
This paper discusses the dangers inherent in attempting to simplify something as complex as development. It does this by exploring the Lynn and Vanhanen theory of deterministic development, which asserts that the varying levels of economic development seen between countries can be explained by differences in 'national intelligence' (national IQ). Assuming that intelligence is genetically determined, and as different races have been shown to have different IQs, they argue that economic development (measured as GDP/capita) is largely a function of race and that interventions to address imbalances can only have a limited impact. The paper presents the Lynn and Vanhanen case and critically discusses the data and analyses (linear regression) upon which it is based. It also extends the cause-effect basis of Lynn and Vanhanen's theory for economic development into human development by using the Human Development Index (HDI). It is argued that, while there is nothing mathematically incorrect in their calculations, there are concerns over the data they employ. Even more fundamentally, it is argued that statistically significant correlations between the various components of the HDI and national IQ can arise via a host of cause-effect pathways, and hence the genetic determinism theory is far from proven. The paper ends by discussing the dangers involved in the use of over-simplistic measures of development as a means of exploring cause-effect relationships. While the creators of development indices such as the HDI have good intentions, simplistic indices can encourage simplistic explanations of under-development.
Abstract:
Development geography has long sought to understand why inequalities exist and the best ways to address them. Dependency theory sets out an historical rationale for under-development based on colonialism and a legacy of a developed core and an under-developed periphery. Race is relevant in this theory only insofar as Europeans are white and the places they colonised were occupied by people with darker skin colour; there is no innate biological reason why it happened in that order. However, a new theory of national inequalities proposed by Lynn and Vanhanen in a series of publications makes the case that poorer countries have that status because of poorer genetic stock rather than an accident of history. They argue that IQ has a genetic basis and that IQ is linked to ability. Thus races with a poorer IQ have less ability, and thus national IQ can be positively correlated with performance as measured by an indicator like GDP/capita. Their thesis is one of despair, as little can be done to improve genetic stock significantly other than a programme of eugenics. This paper summarises and critiques the Lynn and Vanhanen hypothesis and the assumptions upon which it is based, and uses this analysis to show how the human desire to simplify in order to manage can be dangerous in development geography. While attention may naturally be focused on the 'national IQ' variable as a proxy measure of 'innate ability', the assumption of GDP per capita as an indicator of 'success' and 'achievement' is far more readily accepted without criticism. The paper makes the case that the current vogue for indicators, indices and cause-effect can be tyrannical.
Abstract:
This paper deconstructs the relationship between the Environmental Sustainability Index (ESI) and national income. The ESI attempts to provide a single figure which encapsulates 'environmental sustainability' for each country included in the analysis, and this, allied with a 'league table' format so as to name and shame bad performers, has resulted in widespread reporting within the popular presses of a number of countries. In essence, the higher the value of the ESI, the more 'environmentally sustainable' a country is deemed to be. A logical progression beyond the use of the ESI to publicise environmental sustainability is its use within a more analytical context. Thus an index designed to simplify in order to have an impact on policy is used to try to understand causes of good and bad performance in environmental sustainability. For example, the creators of the ESI claim that the ESI is related to GDP/capita (adjusted for Purchasing Power Parity) such that the ESI increases linearly with wealth. While this may in a sense be a comforting picture, do the variables within the ESI allow for alternatives to this story, and if they do, what are the repercussions for those producing such indices for broad consumption amongst policy-makers, managers, the press, etc.? The latter point is especially important given the appetite for such indices amongst non-specialists; for all their weaknesses, the ESI and other such aggregated indices will not go away.
Abstract:
The development of genetically modified (GM) crops has led the European Union (EU) to put forward the concept of 'coexistence' to give farmers the freedom to plant both conventional and GM varieties. Should a premium for non-GM varieties emerge in the market, 'contamination' by GM pollen would generate a negative externality for conventional growers. It is therefore important to assess the effect of different 'policy variables' on the magnitude of the externality, in order to identify suitable policies to manage coexistence. In this paper, taking GM herbicide-tolerant oilseed rape as a model crop, we start from the model developed in Ceddia et al. [Ceddia, M.G., Bartlett, M., Perrings, C., 2007. Landscape gene flow, coexistence and threshold effect: the case of genetically modified herbicide tolerant oilseed rape (Brassica napus). Ecol. Modell. 205, pp. 169-180], use a Monte Carlo experiment to generate data, and then estimate the effect of the number of GM and conventional fields, the width of buffer areas and the degree of spatial aggregation (i.e. the 'policy variables') on the magnitude of the externality at the landscape level. To represent realistic conditions in agricultural production, we assume that detection of GM material in conventional produce might occur at the field level (no grain mixing occurs) or at the silos level (where grain from different fields in the landscape is mixed). In the former case, the magnitude of the externality depends on the number of conventional fields with average transgenic presence above a certain threshold. In the latter case, it depends on whether the average transgenic presence across all conventional fields exceeds the threshold. In order to quantify the effect of the relevant 'policy variables', we compute the marginal effects and the elasticities. Our results show that when relying on marginal effects to assess the impact of the different 'policy variables', spatial aggregation is far more important when transgenic material is detected at the field level, corroborating previous research. However, when elasticity is used, the effectiveness of spatial aggregation in reducing the externality is almost identical whether detection occurs at the field level or at the silos level. Our results also show that the area planted with GM is the most important 'policy variable' affecting the externality to conventional growers, and that buffer areas on conventional fields are more effective than those on GM fields. The implications of the results for coexistence policies in the EU are discussed.
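The contrast between the two measures follows from their definitions: for an externality measure \(E\) and policy variable \(x\), the elasticity rescales the marginal effect by the prevailing levels,
\[
\varepsilon_{x} = \frac{\partial E}{\partial x}\cdot\frac{x}{E},
\]
so two variables with very different marginal effects can show similar elasticities when their baseline levels differ, which is why the ranking of 'policy variables' depends on which measure is used.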
Abstract:
A two-sector Ramsey-type model of growth is developed to investigate the relationship between agricultural productivity and economy-wide growth. The framework takes into account the peculiarities of agriculture both in production (reliance on a fixed natural resource base) and in consumption (the life-sustaining role and low income elasticity of food demand). The transitional dynamics of the model establish that when preferences respect Engel's law, the level and growth rate of agricultural productivity influence the speed of capital accumulation. A calibration exercise shows that a small difference in agricultural productivity has drastic implications for the rate and pattern of growth of the economy. Hence, low agricultural productivity can form a bottleneck limiting growth, because high food prices result in a low saving rate.
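A common way of building Engel's law into such two-sector models (a representative specification, not necessarily the one adopted in the paper) is Stone-Geary utility with a subsistence food requirement \(\bar{c}_{a} > 0\):
\[
u(c_{a}, c_{m}) = \beta \ln\!\left(c_{a} - \bar{c}_{a}\right) + (1-\beta)\ln c_{m},
\]
which makes the income elasticity of food demand less than one, so that the agricultural expenditure share falls as income grows.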
Abstract:
To improve the welfare of the rural poor and keep them in the countryside, the government of Botswana has been spending 40% of the value of agricultural GDP on agricultural support services. But can investment make smallholder agriculture prosperous in such adverse conditions? This paper derives an answer by applying a two-output, six-input stochastic translog distance function, with inefficiency effects and biased technical change, to panel data for the 18 districts and the commercial agricultural sector from 1979 to 1996. This model demonstrates that herds are the most important input, followed by draft power, land and seeds. Multilateral indices for technical change, technical efficiency and total factor productivity (TFP) show that the technology level of the commercial agricultural sector is more than six times that of traditional agriculture and that the gap has been increasing, due to technological regression in traditional agriculture and modest progress in commercial agriculture. Since the levels of efficiency are similar, the same pattern is repeated in the TFP indices. This result highlights the policy dilemma of the trade-off between efficiency and equity objectives.
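For orientation, a generic two-output translog output distance function (symmetry and homogeneity restrictions omitted, and not necessarily the exact specification estimated here) takes the form
\[
\ln D_{O} = \alpha_{0} + \sum_{m=1}^{2}\alpha_{m}\ln y_{m} + \tfrac{1}{2}\sum_{m}\sum_{n}\alpha_{mn}\ln y_{m}\ln y_{n}
+ \sum_{k}\beta_{k}\ln x_{k} + \tfrac{1}{2}\sum_{k}\sum_{l}\beta_{kl}\ln x_{k}\ln x_{l}
+ \sum_{k}\sum_{m}\delta_{km}\ln x_{k}\ln y_{m},
\]
with inefficiency effects and time-varying technical-change terms added in the stochastic-frontier variant.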
Abstract:
Capturing the pattern of structural change is a relevant task in applied demand analysis, as consumer preferences may vary significantly over time. Filtering and smoothing techniques have recently played an increasingly relevant role. A dynamic Almost Ideal Demand System with random walk parameters is estimated in order to detect modifications in consumer habits and preferences, as well as changes in the behavioural response to prices and income. Systemwise estimation, consistent with the underlying constraints from economic theory, is achieved through the EM algorithm. The proposed model is applied to UK aggregate consumption of alcohol and tobacco, using quarterly data from 1963 to 2003. Increased alcohol consumption is explained by a preference shift, addictive behaviour and a lower price elasticity. The dynamic and time-varying specification is consistent with the theoretical requirements imposed at each sample point.
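A hedged sketch of the kind of specification described (standard AIDS budget-share equations with random-walk coefficients cast in state-space form; the notation here is illustrative):
\[
w_{it} = \alpha_{it} + \sum_{j}\gamma_{ij,t}\ln p_{jt} + \beta_{it}\ln\!\left(\frac{x_{t}}{P_{t}}\right) + e_{it},
\qquad
\theta_{t} = \theta_{t-1} + \eta_{t},
\]
where \(w_{it}\) is the budget share of good \(i\), \(p_{jt}\) are prices, \(x_{t}\) is total expenditure, \(P_{t}\) a price index and \(\theta_{t}\) stacks the time-varying parameters; the Kalman filter and smoother supply the E-step of the EM algorithm, with adding-up, homogeneity and symmetry imposed at each sample point.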
Abstract:
Diebold and Lamb (1997) argue that, since the long-run elasticity of supply derived from the Nerlovian model entails a ratio of random variables, it is without moments. They propose minimum expected loss estimation to correct this problem, but in so doing ignore the fact that a non-white-noise error is implicit in the model. We show that, as a consequence, the estimator is biased, and demonstrate that Bayesian estimation which fully accounts for the error structure is preferable.
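The point at issue can be seen from the generic Nerlovian partial-adjustment supply equation,
\[
y_{t} = \alpha + \beta p_{t} + \lambda y_{t-1} + u_{t},
\qquad
\theta_{LR} = \frac{\beta}{1-\lambda},
\]
where the long-run elasticity \(\theta_{LR}\) is estimated by the ratio \(\hat{\beta}/(1-\hat{\lambda})\), a ratio of random variables with no finite moments; a Bayesian treatment works with the posterior of \(\theta_{LR}\) directly while also modelling the serially correlated error that remains implicit in the model.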