916 results for Air exchange rate


Relevance:

80.00%

Publisher:

Abstract:

The conversion coefficients from air kerma to the ICRU operational dose equivalent quantities were determined from X-ray spectrum measurements for ENEA's realization of the X-radiation qualities L10-L35 of the ISO "Low Air Kerma Rate" series (L), N10-N40 of the ISO "Narrow Spectrum" series (N) and H10-H60 of the ISO "High Air Kerma Rate" series (H), plus two beams at 5 kV and 7.5 kV. The pulse-height spectra were measured with a planar high-purity germanium (HPGe) spectrometer and unfolded to fluence spectra using a stripping procedure, which was then validated against Monte Carlo-generated data of the spectrometer response. The portable HPGe detector has a diameter of 8.5 mm and a thickness of 5 mm. The entrance window of the crystal is collimated by a 0.5 mm thick aluminum ring to an open diameter of 6.5 mm, and the crystal is mounted 5 mm behind the beryllium window (thickness 25.4 µm). The Monte Carlo code MCNP-4C was used to calculate the efficiency, escape and Compton curves of the detector in the 5-60 keV energy range. These curves were used to determine the photon spectra produced by a SEIFERT ISOVOLT 160 kV X-ray machine, allowing a precise characterization of photon beams in the low energy range according to ISO 4037. The detector was modelled with the MCNP computer code and the model validated against experimental data. To verify the measurement and stripping procedures, the first and second half-value layers and the air kerma rate were calculated from the count spectra and compared with the values measured with a free-air ionization chamber. For each radiation quality, the spectrum was characterized by the parameters given in ISO 4037-1. The conversion coefficients from air kerma to the ICRU operational quantities Hp(10), Hp(0.07), H'(0.07) and H*(10) were calculated using monoenergetic conversion coefficients. The results are discussed with respect to ISO 4037-4 and compared with published results for low-energy X-ray spectra. The main motivation for this work was the lack of a treatment of the low photon energy region (from a few keV up to about 60 keV).
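Concretely, the spectrum-averaged conversion coefficient is obtained by weighting the monoenergetic coefficients with the air-kerma distribution of the measured fluence spectrum, h = ∫ Φ(E) E (μtr/ρ)(E) h_k(E) dE / ∫ Φ(E) E (μtr/ρ)(E) dE. The sketch below illustrates this folding step; the array contents are placeholder shapes for illustration, not the measured ENEA spectra.

```python
# A minimal sketch of the spectrum-averaging step described above, assuming
# tabulated inputs on a uniform energy grid: fluence spectrum phi(E), air
# mass energy-transfer coefficients mu_tr_rho(E), and monoenergetic
# conversion coefficients h_k(E) (e.g. Hp(10)/Ka from ISO 4037-3).
# All arrays below are illustrative placeholders, not measured spectra.
import numpy as np

def spectrum_averaged_coefficient(E_keV, phi, mu_tr_rho, h_k):
    """Air-kerma-weighted mean conversion coefficient over a fluence spectrum.

    The air kerma per unit fluence at energy E is proportional to
    E * (mu_tr/rho)(E), so the operational quantity per unit air kerma is
    the kerma-weighted average of the monoenergetic coefficients h_k(E).
    """
    kerma_weight = phi * E_keV * mu_tr_rho   # differential air kerma spectrum
    return np.sum(kerma_weight * h_k) / np.sum(kerma_weight)

# Illustrative 10-60 keV example (placeholder shapes only):
E = np.linspace(10.0, 60.0, 51)               # keV, uniform grid
phi = np.exp(-0.5 * ((E - 30.0) / 8.0) ** 2)  # placeholder fluence spectrum
mu = 4.6 * (E / 10.0) ** -2.6                 # rough power-law shape for air
hk = 0.01 * E                                 # placeholder Hp(10)/Ka shape
print(f"h_spectrum = {spectrum_averaged_coefficient(E, phi, mu, hk):.3f} Sv/Gy")
```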

Relevance:

80.00%

Publisher:

Abstract:

In territories where food production is scattered across many small or medium-sized (or even household) farms, a large amount of heterogeneous residues is produced every year, since farmers usually carry out several different activities on their properties. The amount and composition of farm residues therefore vary widely over the year, according to the production process being carried out at any given time. Coupling high-efficiency micro-cogeneration units with easily operated biomass conversion equipment able to treat different materials would bring important advantages to farmers and to the community alike, so that increasing the feedstock flexibility of gasification units is nowadays seen as a key step towards their wider adoption in rural areas and as a real necessity for their use at small scale. Two research topics were considered of main concern for this purpose, and they are discussed in this work: the impact of fuel properties on the development of the gasification process, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. The present work is accordingly divided into two main parts. The first focuses on the biomass gasification process, which was investigated in its theoretical aspects and then modelled analytically in order to simulate the thermo-chemical conversion of different biomass fuels: wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (the temperature profile above all), in order to identify the main differences that prevent the use of a single conversion unit for different materials. To this end, a kinetic-free gasification model was first developed in Excel spreadsheets, considering different air-to-biomass ratios and taking downdraft gasification as the reference technology. The differences in syngas production and working conditions (process temperatures above all) among the considered fuels were related to biomass properties such as elemental composition and ash and water content. The novelty of this analytical approach lies in the use of ratios of kinetic constants to determine the distribution of oxygen among the different oxidation reactions (involving the volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone; this assumption also links together the energy and mass balances in the process algorithm. Moreover, the main advantage of this analytical tool is the ease with which the input data for a particular biomass can be entered into the model, so that a rapid evaluation of its thermo-chemical conversion behaviour, based mainly on its chemical composition, can be obtained. The model results agreed well with literature and experimental data for almost all the considered materials, the exception being refuse-derived fuels, whose chemical composition does not fit the model assumptions. A minimal sketch of this kind of equilibrium calculation is given below.
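As an illustration of the kind of kinetic-free, equilibrium-based calculation described above, the sketch below estimates the syngas composition from a downdraft gasifier using only elemental balances and water-gas shift equilibrium, with methane neglected. The biomass formula CH1.4O0.6, the moisture and air inputs, and the equilibrium-constant correlation are illustrative assumptions, not the thesis's actual model.

```python
# Minimal kinetic-free gasification sketch: 1 mol of biomass CH_xO_y plus
# moisture w and air a is converted to CO, CO2, H2 and H2O (CH4 neglected),
# closing the C/H/O balances with water-gas shift (WGS) equilibrium at T.
# All input values are illustrative assumptions.
import numpy as np
from scipy.optimize import fsolve

x, y = 1.4, 0.6      # H and O per C in the biomass (typical woody values)
w = 0.25             # mol H2O (moisture) per mol C
a = 0.25             # mol O2 (air) per mol C
T = 1073.0           # assumed gasification-zone temperature, K

def k_wgs(T):
    # Common textbook approximation of the WGS equilibrium constant,
    # CO + H2O <-> CO2 + H2 (an assumption, see lead-in).
    return np.exp(4577.8 / T - 4.33)

def residuals(n):
    n_co, n_co2, n_h2, n_h2o = n
    return [
        n_co + n_co2 - 1.0,                         # carbon balance
        2*n_h2 + 2*n_h2o - (x + 2*w),               # hydrogen balance
        n_co + 2*n_co2 + n_h2o - (y + w + 2*a),     # oxygen balance
        n_co2*n_h2 - k_wgs(T)*n_co*n_h2o,           # WGS equilibrium
    ]

n_co, n_co2, n_h2, n_h2o = fsolve(residuals, [0.5, 0.5, 0.5, 0.5])
n_n2 = 3.76 * a                                     # inert nitrogen from air
total = n_co + n_co2 + n_h2 + n_h2o + n_n2
for name, n in [("CO", n_co), ("CO2", n_co2), ("H2", n_h2),
                ("H2O", n_h2o), ("N2", n_n2)]:
    print(f"{name:>3}: {100*n/total:5.1f} mol%")
```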
Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up, based on an analysis of the fundamental thermo-physical and thermo-chemical mechanisms assumed to govern the main solid conversion steps involved in the gasification process. Gasification units were schematically subdivided into four reaction zones, corresponding to biomass heating, solids drying, pyrolysis and char gasification, and the time required for the full development of each of these steps was correlated with kinetic rates (for pyrolysis and char gasification only) and with heat and mass transfer from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the biomass physical properties (particle size above all), it was found that for all the considered materials the char gasification step is kinetically limited, so that temperature is the main working parameter controlling this step. Solids drying is mainly governed by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time depends especially on particle size. Biomass heating is achieved almost entirely by radiative heat transfer from the hot reactor walls to the bed of material. For pyrolysis, instead, working temperature, particle size and the nature of the biomass itself (through its heat of pyrolysis) all have comparable weight, so that the corresponding time may be controlled by any of these factors depending on the fuel being gasified and the conditions established inside the gasifier. The same analysis also yielded estimates of the reaction zone volumes for each biomass fuel, allowing a comparison of the dimensions of gasification units fed with the different materials. Each biomass showed a different distribution of zone volumes, so that no single dimensioned gasification unit appears suitable for more than one biomass species. Nevertheless, since the calculated reactor diameters were quite similar for all the examined materials, a single unit could be designed for all of them by adopting the largest diameter and stacking together the maximum height calculated for each reaction zone across the different biomasses; this would give a total gasifier height of about 2400 mm. Moreover, by arranging air injection nozzles at several levels along the reactor, the gasification zone could be positioned appropriately for whichever material is being gasified at the time. Finally, since the gasification and pyrolysis times were found to change considerably with even small temperature variations, the air feeding rate (on which the process temperatures depend) could be regulated for each gasified material, so that the available reactor volumes would allow complete solid conversion in each case, without appreciably changing the fluid dynamics of the unit or the air-to-biomass ratio. The second part of this work deals with the gas cleaning systems to be installed downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially where multi-fuel gasifiers are assumed, more substantial gas cleaning lines are needed to reach the standard gas quality required by cogeneration units: the more heterogeneous the feed to the gasification unit, the more contaminant species can be simultaneously present in the exit gas stream, and suitable gas cleaning systems have to be designed accordingly. In this work, an overall study of the design of gas cleaning lines is carried out.
Unlike other research efforts in the same field, the main aim here is to define general arrangements for gas cleaning lines that can remove several contaminants from the gas stream, independently of the feedstock material and the plant size. The gas contaminant species taken into account were particulate matter, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed for three plant sizes, corresponding to gas flows of 8 Nm³/h, 125 Nm³/h and 350 Nm³/h. Their performance was examined on the basis of their optimal working conditions (efficiency, temperature and pressure drops above all) and their consumption of energy and materials. The designed units were then combined into alternative overall gas cleaning line arrangements, or paths, following technical constraints derived from the performance analysis of the individual cleaning units and from the likely synergistic effects of contaminants on the operation of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues in path design was the removal of tars from the gas stream, to prevent plugging of filters and clogging of line piping. A catalytic tar cracking unit was identified as the only viable solution, and a catalytic material able to work at relatively low temperatures was therefore chosen. However, a rapid drop in tar cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration, with a correspondingly large air consumption for this operation, was calculated in all cases. Further difficulties arose in the abatement of alkali metals, which condense at lower temperatures than tars but must nevertheless be removed in the first sections of the gas cleaning line to avoid corrosion of materials. In this case a dry scrubbing technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant materials for them, such as ceramics. Beyond these two apparently unavoidable choices, fully high-temperature gas cleaning lines turned out not to be feasible for the two larger plant sizes: since the use of temperature control devices was precluded in the adopted design procedure, ammonia partial oxidation units (the only method considered for high-temperature ammonia abatement) were unsuitable for the large-scale units, because the exothermic reactions involved raise the reactor temperature too far. Despite these limitations, overall arrangements were finally designed for each considered plant size, demonstrating that the gas can be cleaned to the required standard even when several contaminants are simultaneously present in the stream. Moreover, all the paths defined for the different plant sizes were compared with each other on the basis of a set of operational parameters, including total pressure drop, total energy losses, number of units and consumption of secondary materials.
On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all cases, mainly because of the high water consumption of water scrubber units in the ammonia absorption process. This result is, however, tied to the possibility of using activated carbon units for ammonia removal and Nahcolite adsorbers for hydrochloric acid; the very high efficiency of this latter material is also remarkable. Finally, as an estimate of the overall energy loss associated with the gas cleaning process, the total enthalpy losses calculated for the three plant sizes were compared with the energy content of the respective gas streams, the latter based on the lower heating value of the gas only. This overall study of gas cleaning systems is thus proposed as an analytical tool with which different gas cleaning line configurations can be evaluated, according to the practical application for which they are intended and the size of the cogeneration unit to which they are connected.

Relevance:

80.00%

Publisher:

Abstract:

This thesis focuses on two aspects of European economic integration: exchange rate stabilization between non-euro countries and the euro area, and real and nominal convergence of Central and Eastern European countries. Each chapter covers these aspects from both a theoretical and an empirical perspective. Chapter 1 investigates whether the introduction of the euro was accompanied by a shift in the de facto exchange rate policy of European countries outside the euro area, using methods recently developed in the literature to detect "fear of floating" episodes. I find that, after the euro's introduction, European inflation targeters tried to stabilize their exchange rate against the euro, while fixed exchange rate arrangements, apart from official policy changes, remained stable. Finally, the euro seems to have gained a relevant role as a reference currency even outside Europe. Chapter 2 proposes an approach for estimating central bank preferences starting from the central bank's optimization problem within a small open economy, using Sweden as a case study, in order to determine whether exchange rate stabilization played a role in the Riksbank's monetary policy rule. The results show that it did not influence interest rate setting; exchange rate stabilization probably occurred as a by-product of increased economic integration and business cycle convergence. Chapter 3 studies the interactions between wages in the public sector, the traded private sector and the sheltered (non-traded) sector in ten EU transition countries. The theoretical literature on wage spillovers suggests that the traded sector should lead wage setting, with non-traded sector wages adjusting. We show that there is large heterogeneity across countries, and that sheltered and public sector wages are often the leaders in wage determination. This result is relevant from a policy perspective, since wage spillovers that let costs grow faster than productivity may erode the international cost competitiveness of the traded sector.
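As a schematic illustration of the kind of policy-rule estimation described for Chapter 2 (not the thesis's actual method, which works from the central bank's optimization problem), the sketch below fits a smoothed Taylor-type rule with an exchange rate term by OLS; all data are synthetic placeholders.

```python
# Hedged sketch: does the exchange rate enter the interest rate rule?
# i_t = rho*i_{t-1} + (1-rho)*(b_pi*pi_t + b_y*y_t + b_e*de_t) + e_t,
# estimated in reduced form by OLS. Synthetic placeholder data below;
# with real data, pi = inflation, y = output gap, de = exchange rate change.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 200
pi = rng.normal(2.0, 1.0, T)          # inflation (placeholder)
gap = rng.normal(0.0, 1.0, T)         # output gap (placeholder)
de = rng.normal(0.0, 2.0, T)          # exchange rate depreciation (placeholder)
i = np.zeros(T)
for t in range(1, T):                 # simulate a rule with no FX response
    i[t] = 0.8*i[t-1] + 0.2*(1.5*pi[t] + 0.5*gap[t]) + rng.normal(0, 0.1)

X = sm.add_constant(np.column_stack([i[:-1], pi[1:], gap[1:], de[1:]]))
res = sm.OLS(i[1:], X).fit()
print(res.summary(xname=["const", "i_lag", "infl", "gap", "fx_change"]))
# A small, insignificant fx_change coefficient suggests the exchange rate
# did not enter interest rate setting directly.
```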

Relevance:

80.00%

Publisher:

Abstract:

This thesis assesses whether accounting for non-tradable goods sectors in a calibrated Auerbach-Kotlikoff multi-regional overlapping-generations model significantly affects the model's results when simulating the economic impact of demographic change. Non-tradable goods constitute a major part, up to 80 percent, of the GDP of modern economies. At the same time, the multi-regional overlapping-generations models presented in the literature on demographic change have so far ignored their existence and counterfactually assumed perfect tradability between model regions. Moreover, this thesis introduces the assumption of an increasing preference share for non-tradable goods among old generations; this fact-based assumption is also absent from the models in the relevant literature. These obvious simplifications of common models vis-à-vis reality notwithstanding, this thesis concludes that the differences in results between a model featuring non-tradable goods and a common model with perfect tradability are very small. In other words, the common simplification of ignoring non-tradable goods is unlikely to lead to significant distortions in model results. To ensure that differences in results between the "new" model, featuring both non-tradable and tradable goods, and the common model solely reflect the more realistic structure of the "new" model, both models are calibrated to match exactly the same benchmark data and thus show no deviations in their respective baseline steady states. A variation analysis performed in this thesis suggests that differences between the common model and a model with non-tradable goods can in theory be large, but only if the benchmark tradable goods sector is assumed to be unrealistically small. Finally, this thesis analyzes the potential real exchange rate effects of demographic change, which could occur through regional price differences in non-tradable goods. The results show, however, that the shifts in the real exchange rate based on these price differences are negligible.
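To make the last point concrete, here is a minimal sketch (my illustration, with an assumed Cobb-Douglas price aggregation and placeholder numbers, not the thesis's calibration) of how regional non-tradable price differences map into the real exchange rate:

```python
# Sketch: real exchange rate between two regions when the consumption price
# index is a Cobb-Douglas aggregate P = P_T**alpha * P_N**(1-alpha).
# With tradables priced equally across regions (P_T = 1 in both), the RER
# depends only on the relative price of non-tradables. Values are placeholders.

def real_exchange_rate(p_n_home: float, p_n_foreign: float,
                       alpha: float = 0.2) -> float:
    """RER = P_foreign / P_home with P = P_T**alpha * P_N**(1-alpha), P_T = 1.

    alpha is the tradable expenditure share; 0.2 mirrors the abstract's claim
    that non-tradables can reach ~80% of GDP (an assumption here).
    """
    p_home = p_n_home ** (1.0 - alpha)
    p_foreign = p_n_foreign ** (1.0 - alpha)
    return p_foreign / p_home

# A 5% rise in foreign non-tradable prices (e.g. from ageing-driven demand)
# moves the real exchange rate by only about 4%:
print(real_exchange_rate(1.00, 1.05))  # ~1.0398
```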

Relevance:

80.00%

Publisher:

Abstract:

Many industrial solids processes require the production of disperse particles. In industries such as food, personal care and pharmaceuticals, particle formation is widely used to produce solid products or to separate substances in intermediate process steps. The characteristics known to most affect the performance of a solid product are purity, size, internal structure and morphology; these are essential for maintaining optimal operation of subsequent process steps and for obtaining the desired high-quality product. This thesis aims to aid the advancement of particle production technology by (1) investigating the use of a vibrating orifice aerosol generator (VOAG) to collect data for predicting particle attributes, including morphology, size and internal structure, as a function of processing parameters such as solvent, solution concentration, air flow rate and initial droplet size, and (2) determining the extent to which uniform droplet evaporation can be used as a tool to achieve novel particle morphologies, controlled sizes or particular internal structures (crystallinity and crystal form). Experimental results for succinic acid, L-serine and L-glutamic acid suggest that particles with controlled characteristics can indeed be produced by this method. Analysis by scanning electron microscopy (SEM), nanoindentation and X-ray diffraction (XRD) shows that various sizes, internal structures and morphologies can be obtained using the VOAG. Furthermore, unique morphologies and unexpected internal structures were achieved for succinic acid, an added benefit of particle formation by this method.
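The basic size relations for a VOAG are simple enough to state directly: the generator breaks a liquid jet into one droplet per vibration cycle, so the droplet diameter follows from the liquid flow rate and the disturbance frequency, and the final dry particle diameter follows from the solute concentration. The sketch below is a generic illustration of these standard relations with placeholder operating values, not the settings used in this study.

```python
# Standard VOAG size relations (one droplet per vibration cycle):
#   droplet diameter   d_d = (6 Q / (pi f))**(1/3)
#   dry particle dia.  d_p = d_d * (c / rho_p)**(1/3)
# Q = liquid feed rate, f = disturbance frequency, c = solute mass
# concentration, rho_p = particle density. Operating values are placeholders.
import math

def droplet_diameter_um(Q_mL_per_min: float, f_kHz: float) -> float:
    Q = Q_mL_per_min * 1e-6 / 60.0        # m^3/s
    f = f_kHz * 1e3                       # Hz
    return (6.0 * Q / (math.pi * f)) ** (1.0 / 3.0) * 1e6

def particle_diameter_um(d_d_um: float, c_g_per_mL: float,
                         rho_p_g_per_mL: float) -> float:
    return d_d_um * (c_g_per_mL / rho_p_g_per_mL) ** (1.0 / 3.0)

d_d = droplet_diameter_um(Q_mL_per_min=0.15, f_kHz=60.0)
# e.g. succinic acid (density ~1.56 g/mL) from an assumed 10 mg/mL solution:
d_p = particle_diameter_um(d_d, c_g_per_mL=0.010, rho_p_g_per_mL=1.56)
print(f"droplet = {d_d:.1f} um, dry particle = {d_p:.1f} um")
```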

Relevance:

80.00%

Publisher:

Abstract:

Metals price risk management is a key issue in metal markets because of the uncertainty of commodity price fluctuations, exchange rates and interest rate changes, and the huge price risk borne by both producers and consumers of metals. It is therefore a concern for all participants in metal markets, including producers, consumers, merchants, banks, investment funds, speculators and traders. Managing price risk provides stable income for both producers and consumers of metals, and so increases the chance that a firm will invest in attractive projects. The purpose of this research is to evaluate risk management strategies in the copper market. The main tools and strategies of price risk management are hedging and other derivatives such as futures contracts, swaps and options contracts. Hedging is a transaction designed to reduce or eliminate price risk. Derivatives are financial instruments whose returns are derived from other financial instruments, and they are commonly used for managing financial risks. Although derivatives have existed in some form for centuries, their growth has accelerated rapidly during the last 20 years, and they are now widely used by financial institutions, corporations, professional investors and individuals. This project focuses on the over-the-counter (OTC) market and its products, in particular exotic options such as Asian options. The first part of the project describes basic derivatives and risk management strategies, and discusses basic concepts of spot and futures (forward) markets, the benefits and costs of risk management, and the risks and rewards of positions in derivative markets. The second part considers the valuation of commodity derivatives; here the option pricing model DerivaGem is applied to Asian call and put options on London Metal Exchange (LME) copper, both because it is important to understand how Asian options are valued and in order to compare the theoretical values of the options with their observed market values. Predicting future trends in copper prices is essential to managing market price risk successfully, so the third part discusses econometric commodity models. Based on this literature review, the fourth part of the project reports the construction and testing of an econometric model designed to forecast the monthly average price of copper on the LME. More specifically, this part shows how LME copper prices can be explained by a simultaneous-equation structural model (estimated by two-stage least squares regression) connecting supply and demand variables. The simultaneous econometric model built for the copper industry is

$$Q_t^D = e^{-5.0485}\, P_{t-1}^{-0.1868}\, \mathit{GDP}_t^{1.7151}\, e^{0.0158\, \mathit{IP}_t}$$

$$Q_t^S = e^{-3.0785}\, P_{t-1}^{0.5960}\, T_t^{0.1408}\, P_{\mathit{OIL},t}^{-0.1559}\, \mathit{USDI}_t^{1.2432}\, \mathit{LIBOR}_{t-6}^{-0.0561}$$

$$Q_t^D = Q_t^S$$

with the implied reduced form for the copper price

$$P_{t-1}^{CU} = e^{-2.5165}\, \mathit{GDP}_t^{2.1910}\, e^{0.0202\, \mathit{IP}_t}\, T_t^{-0.1799}\, P_{\mathit{OIL},t}^{0.1991}\, \mathit{USDI}_t^{-1.5881}\, \mathit{LIBOR}_{t-6}^{0.0717}$$

where Q_t^D and Q_t^S are world demand for and supply of copper at time t, respectively. P_{t-1} is the lagged price of copper, which is the focus of the analysis in this part. GDP_t is world gross domestic product at time t, representing aggregate economic activity; industrial production should also be considered, so global industrial production growth, denoted IP_t, is included in the model.
T_t is a time trend, a useful proxy for technological change. The price of oil at time t, P_OIL,t, proxies the cost of energy in producing copper. USDI_t is the U.S. dollar index at time t, an important variable for explaining copper supply and prices. Finally, LIBOR_{t-6} is the 1-year London Interbank Offered Rate, lagged six months. Although the model could be applied to other base metals industries, omitted exogenous variables such as the price of substitutes (or a combined variable related to substitute prices) were not considered in this study. Based on this econometric model and a Monte Carlo simulation analysis, the probabilities that the monthly average copper price in 2006 and 2007 will exceed the specific strike price of an option are determined. The final part evaluates risk management strategies, including option strategies, metal swaps and simple options, in the light of the simulation results. The basic option strategies, such as bull spreads, bear spreads and butterfly spreads, created using both call and put options for 2006 and 2007, are evaluated, and each risk management strategy for 2006 and 2007 is analyzed on the basis of the available data and the price prediction model. Applications stemming from this project thus include valuing Asian options, developing a copper price prediction model, forecasting and planning, and decision making for price risk management in the copper market.
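As a schematic illustration of the Monte Carlo step (a sketch under assumed lognormal shocks and placeholder numbers, not the project's actual calibration), the probability that the simulated monthly average copper price exceeds an option's strike can be estimated as follows:

```python
# Sketch: estimate P(average copper price > strike) by Monte Carlo.
# The reduced-form price equation above is summarized here as a point
# forecast plus a lognormal error term; the forecast, volatility and
# strike are placeholder values, not the project's estimates.
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000
forecast = 6000.0   # reduced-form point forecast, USD/tonne (placeholder)
sigma = 0.15        # std. dev. of log price around the forecast (placeholder)
strike = 6500.0     # option strike, USD/tonne (placeholder)

# Lognormal draws centred on the point forecast (mean-preserving correction):
prices = forecast * np.exp(rng.normal(0.0, sigma, n_sims) - 0.5 * sigma**2)

prob = np.mean(prices > strike)
print(f"P(price > {strike:.0f}) = {prob:.3f}")
# Such probabilities can then inform the choice between bull spreads,
# bear spreads, butterfly spreads or swaps for the hedging horizon.
```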

Relevance:

80.00%

Publisher:

Abstract:

The prevalence of ventilated improved pit (VIP) latrines in Ghana suggests that the design enjoys high user acceptance. The two key factors behind user acceptance of a VIP latrine over an alternative design, such as the basic pit latrine, are its ability to remove foul odors and to maintain low fly populations, both of which are a direct result of an adequate ventilation flow rate. Adequate ventilation for odorless conditions in a VIP latrine has been defined by the United Nations Development Programme (UNDP) and the World Bank as an air flow rate equivalent to six air changes per hour (6 ACH) of the superstructure's air volume. The UNDP also identified the three primary factors affecting ventilation: (1) wind passing over the mouth of the vent pipe, (2) wind passing into the superstructure, and (3) solar radiation on the vent pipe. Previous studies further indicate that vent pipes with larger diameters increase flow rates, and that applying carbonaceous materials to the pit sludge reduces odor and insect prevalence. Proper design and construction are critical for the correct functioning of VIP latrines: under-designing can cause problems with odor and insect control, while over-designing increases costs unnecessarily, potentially making it unaffordable for beneficiaries to independently construct, repair or replace a VIP latrine. The present study evaluated the design of VIP latrines used by rural communities in the Upper West Region of Ghana, focusing on whether ventilation is adequate for odor removal and insect control. Thirty VIP latrines from six communities in the Upper West Region were sampled. Each latrine's ventilation flow rate and micro-environment were measured with a hot-wire anemometer probe and a portable weather station for a minimum of four hours. To capture temporal and seasonal variations in ventilation, ten of the latrines were sampled monthly over three months for a minimum of 12 hours each. A latrine usage survey and a cost analysis were also conducted to further assess the VIP latrine as an appropriate technology for sustainable development in the Upper West Region. The average air flow rate over the entire sample set was 11.3 m³/h, with minimum and maximum flow rates of 0.0 m³/h and 48.0 m³/h respectively. Only 1 of the 30 VIP latrines (3%) had an air flow rate exceeding the UNDP-defined odorless condition of 6 ACH, and 19 latrines (63%) averaged less than half the flow rate required to achieve 6 ACH. The dominant factors affecting the ventilation flow rate were wind passing over the mouth of the vent pipe and air buoyancy forces arising from temperature differences between the substructure and the ambient environment. Of 76 usable VIP latrines found in one community, 68.4% were in actual use. The cost of a VIP latrine was found to be equivalent to approximately 12% of the mean annual household income of Upper West Region inhabitants.
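The 6 ACH criterion is easy to state quantitatively: the volumetric flow through the vent pipe, obtained from the anemometer air speed and the pipe's cross-section, is divided by the superstructure air volume. The sketch below illustrates this arithmetic with assumed pipe and superstructure dimensions, not the study's measured ones.

```python
# Sketch of the 6 ACH check: air changes per hour = Q / V, where Q is the
# volumetric flow through the vent pipe and V the superstructure air volume.
# The pipe diameter, room dimensions and air speed below are assumptions.
import math

def air_changes_per_hour(air_speed_m_s: float, pipe_diameter_m: float,
                         room_volume_m3: float) -> float:
    pipe_area = math.pi * (pipe_diameter_m / 2.0) ** 2       # m^2
    Q = air_speed_m_s * pipe_area * 3600.0                   # m^3/h
    return Q / room_volume_m3

V = 1.2 * 1.5 * 2.2                       # assumed superstructure volume, m^3
ach = air_changes_per_hour(air_speed_m_s=0.5, pipe_diameter_m=0.15,
                           room_volume_m3=V)
print(f"{ach:.1f} ACH (target: 6 ACH)")   # ~8.0 ACH for these assumed values
```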

Relevance:

80.00%

Publisher:

Abstract:

This study uses wage data from the UBS Prices and Earnings survey to highlight Disparate Wages in a Globalized World from different perspectives. The wage data are remarkably consistent over the last 40 years and unusually comparable across countries. In the first chapter we analyse the convergence hypothesis for purchasing-power-adjusted wages across the world from 1970 to 2009. The results provide solid evidence for the hypotheses of absolute and conditional convergence in real wages, the key driver being faster overall wage growth in lower-wage countries than in higher-wage countries. At the same time, the highest-skilled professions have experienced the highest wage growth while low-skilled workers' wages have lagged, so no convergence in this sense is found between skill groups. In the second chapter we examine deviations of international wages from factor price equalisation (FPE) theory. Following an approach analogous to Engel (1993), we find that deviations from FPE are more likely driven by the higher variability of wages between countries than by the variability of different wages within countries. With regard to the traditional analysis of the real exchange rate and the Balassa-Samuelson assumptions, our analysis points to a larger impact on the real exchange rate stemming from movements in the real exchange rate of tradables, and only to a lesser extent from the lack of wage equalisation within countries. In the third chapter our results show that India's economic and trade liberalisation, starting in the early 1990s, had very differential impacts on skill premia, both over time and across skill levels. The most striking result is the large increase in wage inequality between high-skilled and low-skilled professions. Both the synthetic control group method and the difference-in-differences (DID) approach suggest that a significant part of this increase in wage inequality can be attributed to India's liberalisation.
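A minimal sketch of the beta-convergence test described in the first chapter (my illustration with synthetic placeholder data, not the UBS dataset): regress average wage growth on the initial wage level; a negative slope indicates convergence.

```python
# Hedged sketch of absolute beta-convergence: average real wage growth
# regressed on the log of the initial wage level; beta < 0 means lower-wage
# countries grew faster. Data below are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 60                                        # "countries"
log_w0 = rng.uniform(0.0, 3.0, n)             # log initial real wage
growth = 0.05 - 0.012 * log_w0 + rng.normal(0, 0.005, n)  # simulated catch-up

X = sm.add_constant(log_w0)
res = sm.OLS(growth, X).fit()
beta = res.params[1]
print(f"beta = {beta:.4f} (t = {res.tvalues[1]:.1f})")
print("convergence" if beta < 0 else "no convergence")
```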

Relevance:

80.00%

Publisher:

Abstract:

The current international integration of financial markets provides a channel through which currency depreciation can affect stock prices, and the recent financial crisis in Asia, with its accompanying exchange rate volatility, affords a case study of that channel. This paper applies a bivariate GARCH-M model of the reduced form of stock market returns to investigate empirically the effects of daily currency depreciation on stock market returns for five newly emerging East Asian stock markets during the Asian financial crisis. The evidence shows that the conditional variances of stock market returns and depreciation rates exhibit time-varying characteristics for all countries. Domestic currency depreciation and its uncertainty adversely affect stock market returns across countries. The significant effects of foreign exchange market events on stock market returns suggest that international fund managers who invest in the newly emerging East Asian stock markets must evaluate the value and stability of the domestic currency as part of their stock market investment decisions.
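The full model is a bivariate GARCH-M, which requires a multivariate volatility package; as a simplified univariate analogue (my sketch with synthetic placeholder data, not the paper's specification), one can regress stock returns on the depreciation rate with GARCH(1,1) errors using the `arch` package:

```python
# Simplified univariate analogue of the paper's bivariate GARCH-M (a hedged
# sketch, not the paper's model): stock returns regressed on the currency
# depreciation rate, with GARCH(1,1) conditional variance. Synthetic data.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(7)
T = 1000
depreciation = rng.normal(0.0, 1.0, T)            # placeholder series
returns = -0.3 * depreciation + rng.normal(0.0, 1.0, T) * np.sqrt(
    0.5 + 0.3 * np.abs(rng.normal(0.0, 1.0, T)))  # heteroskedastic noise

model = arch_model(returns, x=depreciation.reshape(-1, 1),
                   mean='LS', vol='GARCH', p=1, q=1)
res = model.fit(disp='off')
print(res.summary())
# A negative, significant coefficient on the depreciation regressor mirrors
# the paper's finding that depreciation depresses stock returns.
```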

Relevance:

80.00%

Publisher:

Abstract:

In this paper we consider the case for assigning tax revenues to Scotland, by which we mean that taxes levied on Scottish tax bases should be returned to the Scottish budget, with the budget continuing to be supplemented by transfers from the Westminster budget. This arrangement differs from the current situation, in which public spending is largely financed by a block grant from Westminster. Our suggestion falls short of full fiscal federalism for Scotland, under which Scotland would control the choice of tax base and tax rates, and fiscal transfers from Westminster would be minimal. We use propositions drawn from the theory of fiscal federalism to argue for a smaller vertical imbalance between taxes retained in Scotland and public spending in Scotland. A closer matching of spending with taxes would better signal to beneficiaries the true cost of public spending in terms of taxes raised. It would also create more complete incentives for politicians to provide public goods and services in the quantities and at the qualities that voters are actually willing to pay for. Under the current block grant system, the marginal tax cost of spending does not enter political agents' calculations, since spending comes out of a fixed total budget. Moreover, the Scottish electorate is hindered in signalling its desire for local public goods and services, since the size of the total budget is determined by a rigid formula set by Westminster. At the present time we reject proposals for full fiscal federalism because, in sharply reducing vertical imbalance in the Scottish budget, it would likely worsen horizontal balance between Scotland and the other UK regions. Horizontal balance occurs where similarly situated regions enjoy the same per capita level of public goods and services at the same per capita tax cost. The complete removal of the block grant under full fiscal federalism would remove the mechanism that currently promotes horizontal equity in the UK. Variability in own-source tax revenues creates further problems for full fiscal federalism. Taxes derived from North Sea oil would constitute a large proportion of Scottish taxes, but these are known to be volatile in the face of variable oil prices and the pound-dollar exchange rate. At present, variability in oil tax revenue is absorbed by Westminster, and Scotland is insulated through the block grant; this risk-sharing mechanism would be lost under full fiscal federalism. Scotland could turn to financial markets to tide itself over oil tax revenue downturns, but as a much smaller and less diversified financial entity than the UK as a whole it would probably have to borrow on less favourable terms than Westminster can, and it would have to bear this extra cost itself. Also, with full fiscal federalism it is difficult to see how the Scottish budget could be used as a macroeconomic stabilizer. At present, tax revenue downturns in Scotland, together with the steady block grant, are absorbed through an increase in vertical imbalance, which acts as an automatic stabilizer for the Scottish economy. No such mechanism would exist under full fiscal federalism; the borrowing alternative would remain, but on the same less favourable terms as borrowing to finance oil tax shortfalls.

Relevance:

80.00%

Publisher:

Abstract:

Despite the extensive work on currency mismatches, research on the determinants and effects of maturity mismatches is scarce. In this paper I show that emerging market maturity mismatches are negatively affected by capital inflows and price volatilities. Furthermore, I find that banks with low maturity mismatches are more profitable during crisis periods but less profitable otherwise. The latter result implies that banks face a tradeoff between higher returns and risk, so that the channeling of short-term capital into long-term loans is driven by cronyism and implicit guarantees rather than by the depth of the financial market. The positive relationship between maturity mismatches and price volatility, on the other hand, shows that the banks of countries with high exchange rate and interest rate volatility cannot, or choose not to, hedge themselves. These results follow from a panel regression on a data set I constructed by merging bank-level data with aggregate data, which is advantageous over traditional studies that focus only on aggregate data.
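A minimal sketch of the kind of panel regression described (my illustration with synthetic placeholder data and variable names, not the paper's specification), using bank fixed effects and clustered standard errors:

```python
# Hedged sketch: fixed-effects panel regression of a bank-level maturity
# mismatch measure on aggregate capital inflows and price volatility.
# Variable names, data and coefficient signs are placeholders only.
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(3)
banks, years = 50, 10
idx = pd.MultiIndex.from_product(
    [range(banks), range(2000, 2000 + years)], names=["bank", "year"])
df = pd.DataFrame({
    "inflows": rng.normal(0.0, 1.0, banks * years),
    "volatility": rng.normal(0.0, 1.0, banks * years),
}, index=idx)
df["mismatch"] = (-0.4 * df["inflows"] - 0.2 * df["volatility"]
                  + rng.normal(0.0, 1.0, banks * years))

res = PanelOLS(df["mismatch"], df[["inflows", "volatility"]],
               entity_effects=True).fit(cov_type="clustered",
                                        cluster_entity=True)
print(res)
```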

Relevance:

80.00%

Publisher:

Abstract:

This paper tests for the presence of balance sheet effects and analyzes their implications for exchange rate policies in emerging markets. The results reveal that the emerging market bond index (EMBI) is negatively related to banks' foreign currency leverage, and that these banks' foreign currency exposures are relatively unhedged. Panel SVAR methods using the EMBI instead of advanced-country lending rates find, contrary to the literature, that the amplitude of output responses to foreign interest rate shocks is smaller under relatively fixed regimes. The findings are robust to obtaining the impulse responses by the local projections method, and to using country-specific and GARCH-SVAR models.
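As a brief illustration of the local projections method mentioned as a robustness check (a generic sketch with synthetic data, not the paper's specification): impulse responses are read off as the coefficients from a sequence of horizon-h regressions of the outcome on the shock.

```python
# Hedged sketch of local projections: for each horizon h, regress y_{t+h}
# on the identified shock at t; the estimated coefficients trace out the
# impulse response. Synthetic placeholder data below.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
T, H = 300, 8
shock = rng.normal(0.0, 1.0, T)
y = np.zeros(T)
for t in range(1, T):                  # output falls on impact, then recovers
    y[t] = 0.6 * y[t-1] - 0.5 * shock[t] + rng.normal(0.0, 0.3)

irf = []
for h in range(H + 1):
    X = sm.add_constant(shock[: T - h])
    res = sm.OLS(y[h:], X).fit(cov_type="HAC", cov_kwds={"maxlags": h + 1})
    irf.append(res.params[1])
print(np.round(irf, 3))                # response of y to a unit shock, by horizon
```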

Relevance:

80.00%

Publisher:

Abstract:

The paper develops a short-run model of a small, open, financially repressed economy characterized by unorganized money markets, capital goods imports, capital mobility, wage indexation and flexible exchange rates. The analysis shows that financial liberalization, in the form of an increased interest rate on deposits together with tight monetary policy, unambiguously and unconditionally causes deflation, and that this result does not depend on the degree of capital mobility or the structure of wage setting. The paper recommends that a small open developing economy deregulate interest rates and tighten monetary policy if reducing inflation is a priority. A prerequisite for such a policy, however, is the establishment of a flexible exchange rate regime.

Relevance:

80.00%

Publisher:

Abstract:

We develop a portfolio balance model with real capital accumulation. Introducing real capital both as an asset and as a good produced and demanded by firms enriches extant portfolio balance models of exchange rate determination. We show that expansionary monetary policy causes the exchange rate to overshoot not once but twice; the secondary repercussion comes through the reaction of firms to changed asset prices and their decisions to invest in real capital. The model sheds further light on the volatility of real and nominal exchange rates, and it suggests that changes in corporate sector profitability may affect exchange rates through portfolio diversification in corporate securities.

Relevance:

80.00%

Publisher:

Abstract:

We discuss the effectiveness of pegged exchange rate regimes from a historical perspective, drawing conclusions for their effectiveness today. Starting with the classical gold standard period, we point out that a succession of pegged regimes have ended in failure; except for the first, which was ended by the outbreak of World War I, all of the others we discuss were ended by adverse economic developments for which the regimes themselves were partly responsible. Prior to World War II the main problem was a shortage of monetary gold, which we argue is implicated as a cause of the Great Depression. After World War II, and more particularly from the late 1960s, the main problem has been a surfeit of the main international reserve asset, the US dollar, which led to generalized inflation in the 1970s and into the 1980s. Today, excessive creation of dollar international base money is again a problem that could have serious consequences for world economic stability.