19 results for emission trading
in Helda - Digital Repository of University of Helsinki
Abstract:
The study focuses on the potential role of the brick-making industries of Sudan in deforestation and greenhouse gas emissions due to the consumption of biofuels. The results are based on observations of 25 brick-making industries from three administrative regions of Sudan, namely Khartoum, Kassala and Gezira. The methodological approach followed the procedures outlined by the Intergovernmental Panel on Climate Change (IPCC). To predict a serious deforestation scenario, it was also assumed that all wood used for this purpose comes from unsustainable sources. The study revealed that the total annual quantity of fuelwood consumed by the 25 surveyed brick-making industries was 2,381 t dry matter. Accordingly, the total potential deforested wood was 10,624 m3, of which 3,664 m3 was round wood and 6,961 m3 branches. A total of 2,990 t of biomass fuels (fuelwood and dung cake) was consumed annually by the surveyed brick-making industries for brick burning. Consequently, the estimated total annual greenhouse gas emissions were 4,832 t CO2, 21 t CH4, 184 t CO, 0.15 t N2O, 5 t NOx and 3.5 t NO, while the total carbon released into the atmosphere was 1,318 t. Altogether, total annual greenhouse gas emissions from biomass fuel burning were 5,046 t, of which 4,104 t came from fuelwood and 943 t from dung cake burning. According to the results, due to the consumption of fuelwood in the brick-making industries of Sudan (3,450 units), the wood lost annually from the total growing stock of forests and trees in Sudan would be 1,466,000 m3, comprising 505,000 m3 of round wood and 961,000 m3 of branches. Considering all categories of biofuels (fuelwood and dung cake), the total emissions from all the brick-making industries of Sudan were estimated at 663,000 t CO2, 2,900 t CH4, 25,300 t CO, 20 t N2O, 720 t NOx and 470 t NO per annum, while the total carbon released into the atmosphere would be 181,000 t annually.
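As a rough illustration of the IPCC-style accounting the study follows, the sketch below multiplies annual dry-matter fuel consumption by per-fuel emission factors. The dung-cake tonnage is inferred from the abstract (2,990 t total biomass minus 2,381 t fuelwood), but the emission factors are illustrative placeholder assumptions, not the study's values.

```python
# Minimal IPCC Tier 1-style biomass burning emission estimate:
# emission[gas] = fuel_consumed (t dry matter) * emission_factor (kg gas / t dm).
# The emission factors below are illustrative placeholders, NOT the study's values.

FUEL_USE_T_DM = {"fuelwood": 2381, "dung_cake": 609}  # annual use, t dm (609 inferred from the totals)

# Hypothetical emission factors, kg of gas per tonne of dry fuel burned.
EMISSION_FACTORS = {
    "fuelwood":  {"CO2": 1560, "CH4": 6.1, "CO": 58, "N2O": 0.06},
    "dung_cake": {"CO2": 1500, "CH4": 11.0, "CO": 75, "N2O": 0.15},
}

def annual_emissions(fuel_use, factors):
    """Sum emissions (t/yr) over all fuels for each gas."""
    totals = {}
    for fuel, t_dm in fuel_use.items():
        for gas, ef in factors[fuel].items():
            totals[gas] = totals.get(gas, 0.0) + t_dm * ef / 1000.0  # kg -> t
    return totals

if __name__ == "__main__":
    for gas, tonnes in annual_emissions(FUEL_USE_T_DM, EMISSION_FACTORS).items():
        print(f"{gas}: {tonnes:,.1f} t/yr")
```

Scaling the per-industry totals up to the roughly 3,450 brick-making units of Sudan, as the study does, is then a straightforward multiplication.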
Abstract:
The objective of this thesis is to find out how dominant firms in a liberalised electricity market will react when they face an increase in costs due to emissions trading, and how this will affect the price of electricity. The Nordic electricity market is chosen as the setting in which to examine the question, since recent studies on the subject suggest that the interaction between electricity markets and emissions trading depends strongly on conditions specific to each market area. There is reason to believe that imperfect competition prevails in the Nordic market, so the issue is approached through the theory of oligopolistic competition. The generation capacity available in the market, the marginal cost of electricity production and seasonal levels of demand form the data on which the dominant firms are modelled using the Cournot model of competition. The calculations are made for two levels of demand, high and low, and with several values of demand elasticity. The producers are first modelled under no carbon costs and then by adding the cost of carbon dioxide at 20 €/t to those technologies subject to carbon regulation. In all cases the situation under perfect competition is determined as a comparison point for the results of the Cournot game. The results imply that the potential for market power does exist in the Nordic market, but the possibility of exercising it depends on the demand level. In seasons of high demand the dominant firms may raise the price significantly above competitive levels, and the situation is aggravated when the cost of carbon dioxide is accounted for. Under low demand levels there is no difference between perfect and imperfect competition. The results are highly dependent on the price elasticity of demand.
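A stylized sketch of the Cournot comparison, not the thesis's calibrated model: linear inverse demand, constant marginal costs, and unconstrained capacity. The demand and cost figures are illustrative placeholders; only the 20 €/t CO2 price comes from the abstract. Adding the allowance cost raises the marginal cost of emitting technologies and shifts the equilibrium price upward.

```python
# Cournot-Nash sketch: N firms, linear inverse demand P(Q) = a - b*Q, constant
# marginal costs. A CO2 price adds (emission factor t/MWh) * (price EUR/t) to
# the marginal cost of fossil plants. All numbers below are illustrative.

def cournot(a, b, costs):
    """Equilibrium with heterogeneous constant marginal costs.
    FOC per firm: a - b*Q - b*q_i = c_i  =>  P = (a + sum(c)) / (N + 1)."""
    n = len(costs)
    price = (a + sum(costs)) / (n + 1)
    quantities = [(price - c) / b for c in costs]  # assumes an interior solution (all q_i > 0)
    return price, quantities

A, B = 120.0, 0.002              # inverse demand P = A - B*Q (EUR/MWh vs MWh)
FUEL_COST = [10.0, 18.0, 30.0]   # hydro-, nuclear- and coal-like marginal costs
EMISSIONS = [0.0, 0.0, 0.85]     # t CO2 per MWh
CO2_PRICE = 20.0                 # EUR/t, as in the thesis

p0, _ = cournot(A, B, FUEL_COST)
p1, _ = cournot(A, B, [c + e * CO2_PRICE for c, e in zip(FUEL_COST, EMISSIONS)])
print(f"Cournot price without CO2 cost: {p0:.2f} EUR/MWh, with: {p1:.2f} EUR/MWh")
```

In this stylized setting the carbon cost is partly passed through to the price even though only one technology pays it, which is the mechanism behind the price effects the thesis studies.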
Abstract:
This thesis studies the informational efficiency of the European Union emission allowance (EUA) market. In an efficient market, the market price is unpredictable and above-average profits are impossible in the long run. The main research problem is whether the EUA price follows a random walk. The method is an econometric analysis of the price series, which includes an autocorrelation coefficient test and a variance ratio test. The results reveal that the price series is autocorrelated and therefore not a random walk. To find out the extent of the predictability, the price series is modelled with an autoregressive model. The conclusion is that the EUA price is autocorrelated only to a small degree and that the predictability cannot be used to make extra profits. The EUA market is therefore considered informationally efficient, although the price series does not fulfil the requirements of a random walk. A market review supports the conclusion, but it is clear that the market is still maturing.
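The two diagnostics mentioned can be sketched as follows on a log-price series: the first-order return autocorrelation and a Lo-MacKinlay-style variance ratio. The data here are simulated; under a random walk the autocorrelation is near zero and the variance ratio near one.

```python
# Random-walk diagnostics on a log-price series: return autocorrelation and the
# variance ratio VR(q) = Var(q-period return) / (q * Var(1-period return)).
import numpy as np

def autocorr(x, lag=1):
    """Sample autocorrelation of a series at the given lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

def variance_ratio(log_price, q):
    """VR(q); ~1 under a random walk, >1 with positive return autocorrelation."""
    r1 = np.diff(log_price)                  # 1-period log returns
    rq = log_price[q:] - log_price[:-q]      # overlapping q-period log returns
    return rq.var(ddof=1) / (q * r1.var(ddof=1))

rng = np.random.default_rng(0)
p = np.cumsum(rng.normal(0.0, 0.02, 2000))   # simulated random-walk log price
print(f"rho(1) = {autocorr(np.diff(p)):+.3f}, VR(5) = {variance_ratio(p, 5):.3f}")
```

Applied to the EUA price series, significant autocorrelation or a variance ratio away from one rejects the random walk, as the thesis finds, even if the deviation is too small to trade on profitably.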
Abstract:
Interstellar clouds are not featureless, but show quite complex internal structures of filaments and clumps when observed with high enough resolution. These structures have been generated by 1) turbulent motions driven mainly by supernovae, 2) magnetic fields working on the ions and, through neutral-ion collisions, on the neutral gas as well, and 3) self-gravity pulling a dense clump together to form a new star. The study of cloud structure gives us information on the relative importance of each of these mechanisms and helps us gain a better understanding of the details of the star formation process. Interstellar dust is often used as a tracer for the interstellar gas, which forms the bulk of the interstellar matter. Some of the methods used to derive the column density are summarized in this thesis, and a new method, which uses scattered light to map the column density in large fields with high spatial resolution, is introduced. This thesis also examines grain alignment with respect to magnetic fields. The aligned grains give rise to the polarization of starlight and of dust emission, thus revealing the magnetic field. The alignment mechanisms have been debated for the last half century; the strongest candidate at present is the radiative torques mechanism. In the first four papers included in this thesis, the scattered light method of column density estimation is formulated, tested in simulations, and finally used to obtain a column density map from observations. They demonstrate that the scattered light method is a very useful and reliable tool for column density estimation, and is able to provide higher resolution than the near-infrared color excess method. These two methods are complementary. The derived column density maps are also used to gain information on the dust emissivity within the observed cloud. The two final papers present simulations of polarized thermal dust emission under the assumption that the alignment happens by the radiative torques mechanism. We show that radiative torques can explain the observed decline of the polarization degree towards dense cores. Furthermore, the results indicate that the dense cores themselves might not contribute significantly to the polarized signal, and hence one needs to be careful when interpreting the observations and deriving the magnetic field.
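For orientation, the complementary near-infrared color excess method mentioned above can be sketched as below: the reddening of background stars is converted to visual extinction and then to a hydrogen column density. The conversion constants are standard literature values assumed here for illustration, not values taken from the thesis.

```python
# Column density from near-infrared color excess: the observed J-K color of a
# background star, minus an assumed intrinsic color, gives the color excess
# E(J-K), which scales to visual extinction A_V and then to N(H).
import numpy as np

AV_PER_EJK = 5.9    # A_V / E(J-K); assumed typical diffuse-ISM extinction law
NH_PER_AV = 1.9e21  # N(H) per magnitude of A_V, cm^-2 mag^-1 (assumed standard value)

def column_density(j_minus_k_obs, j_minus_k_intrinsic=0.5):
    """N(H) [cm^-2] estimated from observed J-K colors of background stars."""
    e_jk = np.asarray(j_minus_k_obs) - j_minus_k_intrinsic  # color excess E(J-K)
    a_v = AV_PER_EJK * e_jk                                 # visual extinction [mag]
    return NH_PER_AV * a_v

print(f"N(H) ~ {column_density([1.2])[0]:.2e} cm^-2 for an observed J-K of 1.2")
```

The scattered-light method introduced in the thesis targets the same quantity, but from the surface brightness of light scattered by the dust, which allows a higher spatial resolution than the star-by-star color excess sampling.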
Abstract:
Emissions of coal combustion fly ash through full-scale electrostatic precipitators (ESPs) were studied under different coal combustion and operating conditions. The sub-micron fly-ash aerosol emission from a power plant boiler and its ESP was determined, and from it the aerosol penetration, based on electrical mobility measurements, giving an estimate of the size range and the maximum extent to which small particles can escape. The experiments indicate a maximum penetration of 4% to 20% for the small particles when counted on a number basis instead of the normally used mass basis, while the ESP is simultaneously operating at nearly 100% collection efficiency on a mass basis. Although the size range itself appears independent of the coal, of the boiler and even of the device used for emission control, the maximum penetration level on a number basis depends on the ESP operating parameters. The measured emissions were stable during stable boiler operation for a given fired coal, and the emissions differed from coal to coal, indicating that the sub-micron size distribution of the fly ash could be used as a specific characteristic for recognition, for instance for authentication, given an indication of known stable operation. Consequently, the results on the emissions suggest an optimum particle size range for environmental monitoring with respect to the probability of finding traces in samples. The present work also describes an authentication system for aerosol samples for post-inspection from any macroscopic sample piece. The system can comprise the newly introduced devices, used mutually independently or in combination with each other, arranged so as to extend the sampling operation length and/or the tag selection diversity. The tag for the samples can be based on naturally occurring measures of authenticity and/or added ones in a suitable combination. The method has not only military-related applications but also applications in civil industries. Besides aerosol samples, the system can be applied to ink for printing banknotes or other papers of monetary value, and in filter manufacturing for marking fibrous filters.
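A sketch of the number-basis penetration calculation implied above: size-resolved number concentrations measured before and after the ESP give a per-bin penetration, whose complement is the collection efficiency. All concentrations below are illustrative, not measurement data from the study.

```python
# Size-resolved ESP penetration from electrical-mobility number size
# distributions measured upstream and downstream of the precipitator.
# Penetration = downstream / upstream per size bin. Numbers are illustrative.
import numpy as np

diameters_nm = np.array([30, 50, 100, 200, 400, 800])    # bin midpoints
upstream = np.array([5e6, 8e6, 9e6, 6e6, 2e6, 4e5])      # number conc. before ESP, #/cm3
downstream = np.array([2e5, 6e5, 1.2e6, 4e5, 6e4, 4e3])  # number conc. after ESP, #/cm3

penetration = downstream / upstream
for d, p in zip(diameters_nm, penetration):
    print(f"{d:4d} nm: penetration {100*p:5.1f} %, efficiency {100*(1-p):5.1f} %")

# Number-based penetration can reach several percent even when the mass-based
# collection efficiency is close to 100 %, because sub-micron particles carry
# very little of the total mass.
print(f"overall (number basis): {100 * downstream.sum() / upstream.sum():.1f} %")
```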
Abstract:
Volatile organic compounds emitted by nature, especially from forests, can affect local and regional air quality because they react in the atmosphere. Their reaction products can also participate in the formation and growth of new particles, which may affect the radiation balance of the atmosphere and thereby the climate. Particles absorb and scatter solar radiation and the thermal radiation of the Earth, and in addition they affect the radiative properties, amount and lifetime of clouds. On the global scale, biogenic hydrocarbon emissions exceed those caused by human activity many times over. Estimating natural emissions is therefore important for developing effective air quality and climate strategies. This study deals with the hydrocarbon emissions of the boreal forest. The boreal forest, the northern coniferous forest, is the largest terrestrial ecosystem, extending as an almost continuous belt around the entire Northern Hemisphere. It is characterized by a relatively small range of tree species and by strong seasonal variations in conditions and growth. This work investigates the seasonal variation of the hydrocarbon emissions of Scots pine, the most common boreal tree in Finland, and the dependence of the emissions on temperature and light. The results, together with emission measurements made on other boreal trees, were used in an emission model developed for Finnish forests. The model is additionally based on land-use data, a classification developed for Finnish forests, and meteorological data, from which it calculates the hydrocarbon emissions of the forests during the growing season. Throughout the growing season, the emissions of Finnish forests consist largely of alpha- and beta-pinene and delta-carene. In summer and autumn the emissions also contain considerable amounts of sabinene, which originates especially from deciduous trees. The emissions follow the average variation of temperature, are highest in the southern parts of the country, and decrease steadily towards the north. The isoprene emission of the forest is relatively small: in Finland the most important isoprene-emitting tree is Norway spruce, a low emitter, because the high-emitting willow and aspen account for a very small share of the leaf biomass of the forest. This work also presents the first estimate of the sesquiterpene emissions of the forest. Sesquiterpene emissions begin after Midsummer and during the growing season are of the same order of magnitude as the isoprene emissions. On an annual basis, the hydrocarbon emissions of Finnish forests are roughly double the emissions caused by human activity.
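Emission models of this kind typically build on the Guenther-type algorithms: monoterpene emission depends exponentially on temperature, while isoprene emission depends on both light and temperature. A sketch with the commonly used default parameter values, assumed here rather than taken from this work, is given below.

```python
# Guenther-type leaf-level emission algorithms (standard default parameters):
# monoterpenes follow an exponential temperature response; isoprene emission
# is a standard rate corrected by light (C_L) and temperature (C_T) factors.
import math

def monoterpene_emission(E_std, T, T_std=303.15, beta=0.09):
    """E = E_std * exp(beta * (T - T_std)); T in K, E in ug g-1 h-1."""
    return E_std * math.exp(beta * (T - T_std))

def isoprene_emission(E_std, T, ppfd, T_std=303.15):
    """Standard emission rate scaled by light (ppfd, umol m-2 s-1) and temperature."""
    alpha, c_l1 = 0.0027, 1.066
    c_t1, c_t2, T_m = 95000.0, 230000.0, 314.0
    R = 8.314
    c_l = alpha * c_l1 * ppfd / math.sqrt(1.0 + alpha**2 * ppfd**2)
    c_t = (math.exp(c_t1 * (T - T_std) / (R * T_std * T)) /
           (1.0 + math.exp(c_t2 * (T - T_m) / (R * T_std * T))))
    return E_std * c_l * c_t

print(f"monoterpenes: {monoterpene_emission(1.0, 293.15):.2f} ug g-1 h-1 at 20 C")
print(f"isoprene:     {isoprene_emission(1.0, 293.15, 1000.0):.2f} ug g-1 h-1 at 20 C, 1000 PAR")
```

Combined with leaf biomass per forest class and meteorological driving data, such per-gram emission rates scale up to the regional growing-season totals described above.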
Abstract:
X-ray Raman scattering and x-ray emission spectroscopies were used to study the electronic properties and phase transitions in several condensed matter systems. The experimental work, carried out at the European Synchrotron Radiation Facility, was complemented by theoretical calculations of the x-ray spectra and of the electronic structure. The electronic structure of MgB2 at the Fermi level is dominated by the boron σ and π bands. The high density of states provided by these bands is the key feature of the electronic structure contributing to the high critical temperature of superconductivity in MgB2. The electronic structure of MgB2 can be modified by atomic substitutions, which introduce extra electrons or holes into the bands. X-ray Raman scattering was used to probe the interesting σ- and π-band hole states in pure and aluminum-substituted MgB2. A method for determining the final-state density of electron states from experimental x-ray Raman scattering spectra was examined and applied to the experimental data on both pure MgB2 and Mg(0.83)Al(0.17)B2. The extracted final-state densities of electron states for the pure and aluminum-substituted samples revealed clear substitution-induced changes in the σ and π bands. The experimental work was supported by theoretical calculations of the electronic structure and x-ray Raman spectra. X-ray emission at the metal Kβ line was applied to studies of pressure- and temperature-induced spin state transitions in transition metal oxides. The experimental studies were complemented by cluster multiplet calculations of the electronic structure and emission spectra. In LaCoO3, evidence for the appearance of an intermediate spin state was found and the presence of a pressure-induced spin transition was confirmed. Pressure-induced changes in the electronic structure of transition metal monoxides were studied experimentally and analyzed using the cluster multiplet approach. The effects of hybridization, bandwidth and crystal field splitting in stabilizing the high-pressure spin state were discussed. Emission spectroscopy at the Kβ line was also applied to FeCO3, and a pressure-induced iron spin state transition was discovered.
Abstract:
"The increasing pressure for enterprises to join into agile business networks is changing the requirements on the enterprise computing systems. The supporting infrastructure is increasingly required to provide common facilities and societal infrastructure services to support the lifecycle of loosely-coupled, eContract-governed business networks. The required facilities include selection of those autonomously administered business services that the enterprises are prepared to provide and use, contract negotiations, and furthermore, monitoring of the contracted behaviour with potential for breach management. The essential change is in the requirement of a clear mapping between business-level concepts and the automation support for them. Our work has focused on developing B2B middleware to address the above challenges; however, the architecture is not feasible without management facilities for trust-aware decisions for entering business networks and interacting within them. This paper discusses how trust-based decisions are supported and positioned in the B2B middleware."
Abstract:
Modeling and forecasting implied volatility (IV) is important to both practitioners and academics, especially in trading, pricing, hedging and risk management, all of which require accurate volatility estimates. The task has, however, become challenging since the 1987 stock market crash, as implied volatilities (IVs) recovered from stock index options display two patterns: the volatility smirk (skew) and the volatility term structure, which, when examined together, form a rich implied volatility surface (IVS). This implies that the assumptions behind the Black-Scholes (1973) model do not hold empirically, as asset prices are influenced by many underlying risk factors. This thesis, consisting of four essays, models and forecasts implied volatility in the presence of these empirical regularities of options markets. The first essay models the dynamics of the IVS by extending the Dumas, Fleming and Whaley (DFW) (1998) framework: using moneyness in the implied forward price and OTM put-call options on the FTSE100 index, nonlinear optimization is used to estimate different models and thereby produce rich, smooth IVSs. Here, the constant-volatility model fails to explain the variation in the rich IVS. Next, it is found that three factors can explain about 69-88% of the variance in the IVS: on average, 56% is explained by the level factor, 15% by the term-structure factor, and a further 7% by the jump-fear factor. The second essay proposes a quantile regression model of the contemporaneous asymmetric return-volatility relationship, generalizing the model of Hibbert et al. (2008). The results show a strongly negative asymmetric return-volatility relationship at various quantiles of the IV distributions, monotonically increasing when moving from the median quantile to the uppermost quantile (i.e., 95%); OLS therefore underestimates this relationship at the upper quantiles. Additionally, the asymmetric relationship is more pronounced with the smirk (skew) adjusted volatility index measure than with the old volatility index measure. The volatility indices rank in terms of asymmetric volatility as follows: VIX, VSTOXX, VDAX and VXN. The third essay examines the information content of the new VDAX volatility index for forecasting daily Value-at-Risk (VaR) estimates and compares its VaR forecasts with those of Filtered Historical Simulation and RiskMetrics. All daily VaR models are backtested over 1992-2009 using unconditional coverage, independence, conditional coverage and quadratic-score tests. It is found that the VDAX subsumes almost all the information required for daily VaR forecasts for a portfolio of the DAX30 index; the implied-VaR models outperform all other VaR models. The fourth essay models the risk factors driving swaption IVs. It is found that three factors can explain 94-97% of the variation in each of the EUR, USD and GBP swaption IVs. There are significant linkages across factors, and bi-directional causality is at work between the factors implied by EUR and USD swaption IVs. Furthermore, the factors implied by EUR and USD IVs respond to each other's shocks; surprisingly, however, GBP does not affect them. Finally, the string market model calibration results show that it can efficiently reproduce (or forecast) the volatility surface for each of the swaption markets.
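The quantile-regression idea of the second essay can be sketched as follows: changes in an IV index are regressed on contemporaneous index returns, with a separate negative-return term capturing the asymmetry, at several quantiles of the IV-change distribution. The data below are simulated placeholders showing only the mechanics; in real data the coefficients may differ markedly across quantiles, as the essay finds.

```python
# Quantile regression of IV-index changes on contemporaneous returns, with a
# negative-return term for the asymmetry. Data are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
ret = rng.normal(0.0, 0.01, 2500)            # index returns (placeholder)
neg = np.minimum(ret, 0.0)                   # negative part of the return
dvix = -2.0 * ret - 1.5 * neg + rng.normal(0.0, 0.01, 2500)  # asymmetric by construction
df = pd.DataFrame({"dvix": dvix, "ret": ret, "neg": neg})

for q in (0.50, 0.75, 0.95):
    fit = smf.quantreg("dvix ~ ret + neg", df).fit(q=q)
    print(f"q={q:.2f}: return coef {fit.params['ret']:+.2f}, "
          f"extra negative-return coef {fit.params['neg']:+.2f}")
```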
Abstract:
Market microstructure is "the study of the trading mechanisms used for financial securities" (Hasbrouck 2007). It seeks to understand the sources of value and reasons for trade, in a setting with different types of traders, and different private and public information sets. The actual mechanisms of trade are a continually changing object of study. These include continuous markets, auctions, limit order books, dealer markets, or combinations of these operating as a hybrid market. Microstructure also has to allow for the possibility of multiple prices. At any given time an investor may be faced with a multitude of different prices, depending on whether he or she is buying or selling, the quantity he or she wishes to trade, and the required speed for the trade. The price may also depend on the relationship that the trader has with potential counterparties. In this research, I touch upon all of the above issues. I do this by studying three specific areas, all of which have both practical and policy implications. First, I study the role of information in trading and pricing securities in markets with a heterogeneous population of traders, some of whom are informed and some not, and who trade for different private or public reasons. Second, I study the price discovery of stocks in a setting where they are simultaneously traded in more than one market. Third, I make a contribution to the ongoing discussion about market design, i.e. the question of which trading systems and ways of organizing trading are most efficient. A common characteristic throughout my thesis is the use of high frequency datasets, i.e. tick data. These datasets include all trades and quotes in a given security, rather than just the daily closing prices, as in traditional asset pricing literature. This thesis consists of four separate essays. In the first essay I study price discovery for European companies cross-listed in the United States. I also study explanatory variables for differences in price discovery. In my second essay I contribute to earlier research on two issues of broad interest in market microstructure: market transparency and informed trading. I examine the effects of a change to an anonymous market at the OMX Helsinki Stock Exchange. I broaden my focus slightly in the third essay, to include releases of macroeconomic data in the United States. I analyze the effect of these releases on European cross-listed stocks. The fourth and last essay examines the uses of standard methodologies of price discovery analysis in a novel way. Specifically, I study price discovery within one market, between local and foreign traders.
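As one concrete example of the kind of tick-data computation such studies rest on, the sketch below signs each trade against the prevailing mid-quote and computes the effective spread, a standard microstructure trading-cost measure. The column names and numbers are hypothetical, not the thesis data.

```python
# Effective spread from trades and prevailing quotes: sign each trade with a
# simple quote rule (buy if at or above the mid), then compute
# 2 * side * (price - mid). All data below are hypothetical.
import numpy as np
import pandas as pd

ticks = pd.DataFrame({
    "price": [10.02, 9.99, 10.03, 10.00],   # trade prices
    "bid":   [9.99, 9.98, 10.00, 9.99],     # best bid prevailing at trade time
    "ask":   [10.03, 10.02, 10.04, 10.03],  # best ask prevailing at trade time
})
mid = (ticks["bid"] + ticks["ask"]) / 2
side = np.where(ticks["price"] >= mid, 1, -1)            # quote-rule trade signing
ticks["effective_spread"] = 2 * side * (ticks["price"] - mid)
print(ticks)
```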
Abstract:
Liquidity, or how easy an investment is to buy or sell, is becoming increasingly important for financial market participants. The objective of this dissertation is to contribute to the understanding of how liquidity affects financial markets. The first essays analyze the actions taken by underwriters immediately after listing to improve the liquidity of the IPO stock. To estimate the impact of underwriter activity on the pricing of the IPOs, the order book during the first weeks of trading in the IPO stock is studied. Evidence of stabilization and liquidity-enhancing activities by underwriters is found. The second half of the dissertation is concerned with the daily trading of stocks, where liquidity may be affected by policy issues such as changes in taxes or exchange fees and by opening market access to foreign investors. The desirability of a transaction tax on securities trading is addressed; an increase in the transaction tax is found to cause lower prices and higher volatility. In the last essay the objective is to determine whether the liquidity of a security has an impact on the return investors require. The results support the notion that returns are negatively correlated with liquidity.
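For the last essay's question, a standard proxy from the liquidity literature, not necessarily the dissertation's own measure, is the Amihud (2002) illiquidity ratio: the average of absolute daily return per unit of daily trading volume. A sketch on simulated placeholder data:

```python
# Amihud (2002) illiquidity proxy: mean(|daily return| / daily volume in
# currency). Higher values mean larger price impact per traded euro, i.e.
# lower liquidity. Input series are simulated placeholders.
import numpy as np

rng = np.random.default_rng(2)
returns = rng.normal(0.0, 0.02, 250)                       # daily returns
volume_eur = rng.lognormal(mean=13.0, sigma=0.5, size=250) # daily volume, EUR

illiq = np.mean(np.abs(returns) / volume_eur)
print(f"Amihud ILLIQ: {illiq:.3e} (higher = less liquid)")
```

Cross-sectional regressions of average returns on such a measure are then the usual way to test whether investors demand compensation for illiquidity.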
Abstract:
This paper uses the Value-at-Risk approach to define the risk in both long and short trading positions. The investigation is carried out on some major market indices (Japanese, UK, German and US). The performance of models that take into account skewness and fat tails is compared to that of symmetric models, with respect to both the specific model for estimating the variance and the distribution of the variance estimate used as input in the VaR estimation. The results indicate that more flexible models do not necessarily perform better in forecasting VaR; the most probable reason for this is the complexity of these models. A general result is that different methods for estimating the variance are needed at different confidence levels of the VaR and for the different indices. Also, different models are to be used for the left and the right tail of the distribution, respectively.
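A sketch of the setting described: parametric VaR for long (left-tail) and short (right-tail) positions under a normal versus a fat-tailed Student-t assumption, plus Kupiec's unconditional coverage backtest of the violation rate. All data and parameters below are illustrative, not the paper's models or results.

```python
# Long/short parametric VaR under normal vs. Student-t return distributions,
# with a Kupiec unconditional coverage backtest. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
returns = stats.t.rvs(df=4, scale=0.01, size=2000, random_state=rng)  # fat-tailed returns

alpha = 0.01
sigma = returns.std(ddof=1)
var_long_normal = stats.norm.ppf(alpha) * sigma        # left tail (long position)
var_short_normal = stats.norm.ppf(1 - alpha) * sigma   # right tail (short position)

nu = 4
t_scale = sigma * np.sqrt((nu - 2) / nu)               # match the sample variance
var_long_t = stats.t.ppf(alpha, df=nu, scale=t_scale)  # fatter left tail than normal

def kupiec_pvalue(violations, n, alpha):
    """Kupiec LR test: is the observed violation rate consistent with alpha?"""
    x, p, phat = violations, alpha, violations / n
    lr = -2.0 * ((n - x) * np.log(1 - p) + x * np.log(p)
                 - (n - x) * np.log(1 - phat) - x * np.log(phat))
    return 1.0 - stats.chi2.cdf(lr, df=1)

x = int((returns < var_long_normal).sum())
print(f"normal long VaR: {var_long_normal:.4f}, Student-t long VaR: {var_long_t:.4f}")
print(f"normal VaR violated {x}/{len(returns)} times, Kupiec p = {kupiec_pvalue(x, len(returns), alpha):.3f}")
```

With fat-tailed returns, the normal VaR is typically violated more often than the nominal 1%, which the Kupiec test flags; the Student-t VaR sits further out in the tail and is designed to restore correct coverage.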