15 results for Semi-Empirical Methods

in Helda - Digital Repository of the University of Helsinki


Relevance:

90.00%

Publisher:

Abstract:

In lake-rich regions, the gathering of information about water quality is challenging because only a small proportion of the lakes can be assessed each year by conventional methods. One of the techniques for improving the spatial and temporal representativeness of lake monitoring is remote sensing from satellites and aircraft. The experimental material included detailed optical measurements in 11 lakes, air- and spaceborne remote sensing measurements with concurrent field sampling, automatic raft measurements and a national dataset of routine water quality measurements from over 1100 lakes. The analyses of the spatially high-resolution airborne remote sensing data from eutrophic and mesotrophic lakes showed that one or a few discrete water quality observations obtained by conventional monitoring can yield a clear over- or underestimation of the overall water quality in a lake. The use of TM-type satellite instruments in addition to routine monitoring substantially increases the number of lakes for which water quality information can be obtained. The preliminary results indicated that coloured dissolved organic matter (CDOM) can be estimated with TM-type satellite instruments, which could possibly be utilised as an aid in estimating the role of lakes in global carbon budgets. Based on the results of reflectance modelling and experimental data, the MERIS satellite instrument has optimal or near-optimal channels for the estimation of turbidity, chlorophyll a and CDOM in Finnish lakes. MERIS images with a 300 m spatial resolution can provide water quality information for different parts of large and medium-sized lakes, and can fill in the gaps left by conventional monitoring. Algorithms that do not require simultaneous field data for algorithm training would increase the amount of remote sensing-based information available for lake monitoring. The MERIS Boreal Lakes processor, trained with the optical data and concentration ranges provided by this study, enabled turbidity estimations with good accuracy without the need for algorithm correction with field measurements, while chlorophyll a and CDOM estimations require further development of the processor. The accuracy of interpreting chlorophyll a via semi-empirical algorithms can be improved by classifying lakes prior to interpretation according to their CDOM level and trophic status. Optical modelling indicated that the spectral diffuse attenuation coefficient can be estimated with reasonable accuracy from the measured water quality concentrations. This provides more detailed information on light attenuation from routine monitoring measurements than is available through the Secchi disk transparency. The results of this study improve the interpretation of lake water quality by remote sensing and encourage the use of remote sensing in lake monitoring.
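
As an illustration of the kind of semi-empirical algorithm referred to above, the sketch below applies a red-edge/red band-ratio regression of the type commonly used with MERIS-like data. The band choice and the coefficients a and b are illustrative assumptions, not values from the thesis; in practice they would be fitted to concurrent field samples, ideally separately for each lake class (CDOM level and trophic status).

```python
# Minimal sketch of a semi-empirical band-ratio chlorophyll-a estimate.
# Coefficients (a, b) are placeholder values; in practice they are fitted
# to field samples for each lake class (CDOM level, trophic status).

def chlorophyll_a(reflectance_705: float, reflectance_665: float,
                  a: float = 25.0, b: float = -5.0) -> float:
    """Estimate chlorophyll a (ug/l) from a red-edge / red reflectance ratio."""
    ratio = reflectance_705 / reflectance_665   # MERIS-type band ratio
    return a * ratio + b

# Example: one pixel of a mesotrophic lake
print(chlorophyll_a(0.021, 0.018))
```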

Relevance:

80.00%

Publisher:

Abstract:

In the eighteenth century, the birth of scientific societies in Europe created a new framework for scientific cooperation. Through a new contextualist study of the contacts between the first scientific societies in Sweden and the most important science academy in Europe at the time, l'Académie des Sciences in Paris, this dissertation aims to shed light on the role taken by the Swedish learned men in the new networks. It seeks to show that the academy model was related to a new idea of specialisation in science. In the course of the eighteenth century, it is argued, the study of the northern phenomena and regions offered the Swedes an important field of speciality with regard to their foreign colleagues. Although historical studies have often underlined the economic, practical undertone of eighteenth-century Swedish science, participation in fashionable scientific pursuits had also become an important scene for representation. However, the views prevailing in Europe tied civilisation and learning closely to the sunnier, southern climates, which had led to the difficulty of portraying Sweden as a learned country. The image of the scientific North, as well as the Swedish strategies to polish the image of the North as a place for science, are analysed as seen from France. While sixteenth-century historians had preferred to play down the effects of the cold and to claim that northern conditions were similar to those elsewhere, the scientific exchange between Swedish and French researchers shows a new tendency to underline the difference of the North and its harsh climate. An explanation is sought by analysing how information about northern phenomena was used in France. In the European academies, new empirical methods had led to a need for direct observations of different phenomena and circumstances. Rather than curiosities or objects of exoticism, the eighteenth-century depictions of the northern periphery tell of an emerging interest in the most extreme, and often most telling, examples of the workings of the invariable laws of nature. Whereas the idea of accumulating knowledge through cooperation was most manifest in joint astronomical projects, the idea of gathering and comparing data from differing places of observation appears also in other fields, from experimental philosophy to natural studies and medicine. The effects of these developments are studied and explained in connection with the Montesquieuan climate theories and the emerging pre-romantic ideas of man and society.

Relevance:

80.00%

Publisher:

Abstract:

Controlled nuclear fusion is one of the most promising sources of energy for the future. Before this goal can be achieved, one must be able to control the enormous energy densities which are present in the core plasma in a fusion reactor. In order to be able to predict the evolution and thereby the lifetime of different plasma-facing materials under reactor-relevant conditions, the interaction of atoms and molecules with plasma-facing first wall surfaces has to be studied in detail. In this thesis, the fundamental sticking and erosion processes of carbon-based materials, the nature of hydrocarbon species released from plasma-facing surfaces, and the evolution of the components under cumulative bombardment by atoms and molecules have been investigated by means of molecular dynamics simulations using both analytic potentials and a semi-empirical tight-binding method. The sticking cross-section of CH3 radicals at unsaturated carbon sites on diamond (111) surfaces is observed to decrease with increasing angle of incidence, a dependence which can be described by a simple geometrical model. The simulations furthermore show the sticking cross-section of CH3 radicals to be strongly dependent on the local neighborhood of the unsaturated carbon site. The erosion of amorphous hydrogenated carbon surfaces by helium, neon, and argon ions in combination with hydrogen at energies ranging from 2 to 10 eV is studied using both non-cumulative and cumulative bombardment simulations. The results show no significant differences between sputtering yields obtained from bombardment simulations with different noble gas ions. The final simulation cells from the 5 and 10 eV ion bombardment simulations, however, show marked differences in surface morphology. In further simulations the behavior of amorphous hydrogenated carbon surfaces under bombardment with D+, D2+, and D3+ ions in the energy range from 2 to 30 eV has been investigated. The total chemical sputtering yields indicate that molecular projectiles lead to larger sputtering yields than atomic projectiles. Finally, the effect of hydrogen ion bombardment on both crystalline and amorphous tungsten carbide surfaces is studied. Prolonged bombardment is found to lead to the formation of an amorphous tungsten carbide layer, regardless of the initial structure of the sample. In agreement with experiment, preferential sputtering of carbon is observed in both the cumulative and non-cumulative simulations.
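
The molecular dynamics machinery behind such bombardment simulations is, at its core, a time integrator applied to interatomic forces. The sketch below shows a generic velocity-Verlet step; the Lennard-Jones force is only a self-contained placeholder, not the analytic bond-order potentials or the semi-empirical tight-binding model used in the thesis.

```python
import numpy as np

# Generic velocity-Verlet MD step, the integration scheme underlying most
# classical MD codes. The pairwise Lennard-Jones force below is a placeholder
# so the example runs; the thesis itself uses analytic bond-order potentials
# and a semi-empirical tight-binding model for C, H, W and noble-gas systems.

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces (reduced units) for a small cluster."""
    f = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            d2 = np.dot(r, r)
            s6 = (sigma**2 / d2) ** 3
            fij = 24 * eps * (2 * s6**2 - s6) / d2 * r
            f[i] += fij
            f[j] -= fij
    return f

def velocity_verlet(pos, vel, mass, dt, force_fn):
    """Advance positions and velocities by one time step dt."""
    f = force_fn(pos)
    vel_half = vel + 0.5 * dt * f / mass
    pos_new = pos + dt * vel_half
    vel_new = vel_half + 0.5 * dt * force_fn(pos_new) / mass
    return pos_new, vel_new

pos = np.array([[0.0, 0.0, 0.0], [1.3, 0.0, 0.0]])
vel = np.zeros_like(pos)
pos, vel = velocity_verlet(pos, vel, mass=1.0, dt=0.005, force_fn=lj_forces)
```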

Relevance:

80.00%

Publisher:

Abstract:

Aerosols impact the planet and our daily lives through various effects, perhaps most notably those related to their climatic and health-related consequences. While there are several primary particle sources, secondary new particle formation from precursor vapors is also known to be a frequent, global phenomenon. Nevertheless, the formation mechanism of new particles, as well as the vapors participating in the process, remain a mystery. This thesis consists of studies on new particle formation specifically from the point of view of numerical modeling. The formation rate of 3 nm particles has been observed to depend on the sulphuric acid concentration raised to a power of 1-2. This suggests that the nucleation mechanism is of first or second order with respect to the sulphuric acid concentration, in other words a mechanism based on activation or on kinetic collision of clusters. However, model studies have had difficulties in replicating the small exponents observed in nature. The work done in this thesis indicates that the exponents may be lowered by the participation of a co-condensing (and potentially nucleating) low-volatility organic vapor, or by increasing the assumed size of the critical clusters. On the other hand, the presented new and more accurate method for determining the exponent indicates high diurnal variability. Additionally, these studies included several semi-empirical nucleation rate parameterizations as well as a detailed investigation of the analysis used to determine the apparent particle formation rate. Because they cover a large proportion of the earth's surface area, oceans could potentially prove to be climatically significant sources of secondary particles. In the absence of marine observation data, new particle formation events in a coastal region were parameterized and studied. Since the formation mechanism is believed to be similar, the new parameterization was applied in a marine scenario. The work showed that marine CCN production is feasible in the presence of additional vapors contributing to particle growth. Finally, a new method to estimate concentrations of condensing organics was developed. The algorithm utilizes a Markov chain Monte Carlo method to determine the required combination of vapor concentrations by comparing a measured particle size distribution with one from an aerosol dynamics process model. The evaluation indicated excellent agreement with model data, and initial results with field data appear sound as well.
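
The first- and second-order mechanisms mentioned above correspond to the widely used activation and kinetic nucleation parameterizations, J = A[H2SO4] and J = K[H2SO4]^2. The sketch below writes these out; the coefficients A and K are illustrative placeholders, since in practice they are fitted to observations and vary strongly between sites.

```python
# Minimal sketch of first- and second-order nucleation rate parameterizations.
# The prefactors A and K are illustrative placeholders, not fitted values.

def nucleation_rate_activation(h2so4, A=2.0e-7):
    """Activation-type rate, first order in sulphuric acid: J = A * [H2SO4]."""
    return A * h2so4              # [H2SO4] in cm^-3, J in cm^-3 s^-1

def nucleation_rate_kinetic(h2so4, K=5.0e-14):
    """Kinetic-type rate, second order in sulphuric acid: J = K * [H2SO4]^2."""
    return K * h2so4 ** 2

print(nucleation_rate_activation(1.0e7), nucleation_rate_kinetic(1.0e7))
```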

Relevance:

80.00%

Publisher:

Abstract:

Man-induced climate change has raised the need to predict the future climate and its feedback on vegetation. These are studied with global climate models; to ensure the reliability of these predictions, it is important to have a biosphere description that is based upon the latest scientific knowledge. This work concentrates on the modelling of the CO2 exchange of the boreal coniferous forest, also studying the factors controlling its growing season and how these can be used in modelling. In addition, CO2 gas exchange was modelled at several scales. A canopy-level CO2 gas exchange model was developed based on the biochemical photosynthesis model. This model was first parameterized using CO2 exchange data obtained by eddy covariance (EC) measurements from a Scots pine forest at Sodankylä. The results were compared with a semi-empirical model that was also parameterized using EC measurements. Both of the models gave satisfactory results. The biochemical canopy-level model was further parameterized at three other coniferous forest sites located in Finland and Sweden. At all the sites, the two most important biochemical model parameters showed seasonal behaviour, i.e., their temperature responses changed according to the season. Modelling results were improved when these changeover dates were related to temperature indices. During summer-time the values of the biochemical model parameters were similar at all four sites. Different control factors for CO2 gas exchange were studied at the four coniferous forests, including how well these factors can be used to predict the initiation and cessation of CO2 uptake. Temperature indices, atmospheric CO2 concentration, surface albedo and chlorophyll fluorescence (CF) were all found to be useful and to have predictive power. In addition, a detailed simulation study of leaf stomata was performed in order to separate physical and biochemical processes. The simulation study brought to light the relative contribution and importance of the physical transport processes. The results of this work can be used to improve CO2 gas exchange models in boreal coniferous forests. The meteorological and biological variables that represent the seasonal cycle were studied, and a method for incorporating this cycle into a biochemical canopy-level model was introduced.
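
Biochemical photosynthesis models of the kind referred to above are usually Farquhar-type formulations in which net assimilation is the minimum of a Rubisco-limited and a light-limited rate minus respiration. The sketch below shows that structure; the parameter values are generic illustrative defaults, not the site-specific values estimated in the thesis.

```python
# Minimal sketch of a Farquhar-type biochemical leaf photosynthesis model,
# the building block of biochemical canopy-level CO2 exchange models.
# Parameter values are illustrative defaults, not fitted site values.

def net_assimilation(ci, vcmax=50.0, j=100.0, rd=1.0,
                     gamma_star=40.0, kc=400.0, ko=250.0, o=210.0):
    """Net CO2 assimilation (umol m-2 s-1) at intercellular CO2 ci (ppm)."""
    a_c = vcmax * (ci - gamma_star) / (ci + kc * (1.0 + o / ko))  # Rubisco-limited
    a_j = j * (ci - gamma_star) / (4.0 * ci + 8.0 * gamma_star)   # light-limited
    return min(a_c, a_j) - rd

print(net_assimilation(ci=280.0))
```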

Relevance:

30.00%

Publisher:

Abstract:

This thesis discusses the use of sub- and supercritical fluids as the medium in extraction and chromatography. Super- and subcritical extraction was used to separate essential oils from the herbal plant Angelica archangelica. The effect of extraction parameters was studied and sensory analyses of the extracts were carried out by an expert panel. The results of the sensory analyses were compared to the analytically determined contents of the extracts. Sub- and supercritical fluid chromatography (SFC) was used to separate and purify high-value pharmaceuticals. Chiral SFC was used to separate the enantiomers of racemic mixtures of pharmaceutical compounds. Very low (cryogenic) temperatures were applied to substantially enhance the separation efficiency of chiral SFC. The thermodynamic aspects affecting the resolving ability of chiral stationary phases are briefly reviewed. The process production rate, which is a key factor in industrial chromatography, was optimized by empirical multivariate methods. A general linear model was used to optimize the separation of omega-3 fatty acid ethyl esters from esterized fish oil by using reversed-phase SFC. The chiral separation of racemic mixtures of guaifenesin and ferulic acid dimer ethyl ester was optimized by using the response surface method with three variables at a time. It was found that by optimizing four variables (temperature, load, flow rate and modifier content) the production rate of the chiral resolution of racemic guaifenesin by cryogenic SFC could be increased severalfold compared to published results for a similar application. A novel pressure-compensated design of industrial high-pressure chromatographic column was introduced, using the technology developed in building the deep-sea submersibles Mir 1 and Mir 2. A demonstration SFC plant was built and the immunosuppressant drug cyclosporine A was purified to meet the requirements of the US Pharmacopoeia. A smaller semi-pilot-size column of similar design was used for the cryogenic chiral separation of the aromatase inhibitor Finrozole for use in its phase 2 development.
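
To make the response-surface idea concrete, the sketch below fits a second-order model of production rate as a function of two coded process variables and solves for the stationary point. The design points and responses are invented placeholders purely to make the example runnable; they are not data from the thesis.

```python
import numpy as np

# Minimal response-surface sketch: fit a quadratic model of production rate
# in two coded factors (e.g. temperature and modifier content) and find the
# stationary point. All numbers below are illustrative placeholders.

x1 = np.array([-1, -1, -1, 0, 0, 0, 1, 1, 1], dtype=float)
x2 = np.array([-1, 0, 1, -1, 0, 1, -1, 0, 1], dtype=float)
y = np.array([2.1, 2.9, 2.4, 2.8, 3.6, 3.1, 2.5, 3.2, 2.6])  # production rates

# Design matrix for y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point of the fitted quadratic surface (solve grad = 0)
b = coef[1:3]
B = np.array([[2 * coef[3], coef[5]], [coef[5], 2 * coef[4]]])
print("optimum (coded units):", np.linalg.solve(B, -b))
```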

Relevance:

30.00%

Publisher:

Abstract:

Modifications of surface materials and their effects on cleanability have important impacts in many fields of activity. In this study the primary aim was to develop radiochemical methods suitable for evaluating cleanability in material research for different environments. Another aim was to investigate the effects of surface modifications on the cleanability and surface properties of plastics, ceramics, concrete materials and also their coatings in conditions simulating their typical environments. Several new 51Cr- and 14C-labelled soils were developed for the testing situations. The new radiochemical methods developed were suitable for examining different surface materials and different soil types, providing quantitative information about the amount of soil on surfaces. They also take into account soil soaked into the surfaces. The supporting methods, colorimetric determination and ATP bioluminescence, provided semi-quantitative results. The results from the radiochemical and supporting methods partly correlated with each other. From a material research point of view, numerous new materials were evaluated. These included both laboratory-made model materials and commercial products. Increasing the amount of plasticizer decreased the cleanability of poly(vinyl chloride) (PVC) materials. Microstructured surfaces of plastics improved the cleanability of PVC from particle soils, whereas for oil soil microstructuring reduced the cleanability. In the case of glazed ceramic materials, coatings affected the cleanability. The roughness of the surfaces correlated with cleanability from particle soils, and the cleanability from oil soil correlated with the contact angles. Organic particle soil was removed more efficiently from TiO2-coated ceramic surfaces after UV irradiation than without UV treatment, whereas no effect was observed on the cleanability from oil soil. Coatings improved the cleanability of concrete flooring materials intended for use in animal houses.

Relevance:

30.00%

Publisher:

Abstract:

Matrix decompositions, where a given matrix is represented as a product of two other matrices, are regularly used in data mining. Most matrix decompositions have their roots in linear algebra, but the needs of data mining are not always those of linear algebra. In data mining one needs results that are interpretable, and what is considered interpretable in data mining can be very different from what is considered interpretable in linear algebra. The purpose of this thesis is to study matrix decompositions that directly address the issue of interpretability. An example is a decomposition of binary matrices where the factor matrices are assumed to be binary and the matrix multiplication is Boolean. The restriction to binary factor matrices increases interpretability, since the factor matrices are of the same type as the original matrix, and allows the use of Boolean matrix multiplication, which is often more intuitive than normal matrix multiplication with binary matrices. Several other decomposition methods are also described, and the computational complexity of computing them is studied together with the hardness of approximating the related optimization problems. Based on these studies, algorithms for constructing the decompositions are proposed. Constructing the decompositions turns out to be computationally hard, and the proposed algorithms are mostly based on various heuristics. Nevertheless, the algorithms are shown to be capable of finding good results in empirical experiments conducted with both synthetic and real-world data.
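
The Boolean product mentioned above replaces the sum of products with a logical OR of ANDs: an element of the product is 1 if any rank-1 factor covers it. A minimal sketch of that operation, with a small invented example, is shown below.

```python
import numpy as np

# Minimal sketch of Boolean matrix multiplication: (B o C)[i, j] is 1 if
# there exists k with B[i, k] = 1 and C[k, j] = 1 (OR of ANDs, no carries).

def boolean_matmul(B: np.ndarray, C: np.ndarray) -> np.ndarray:
    """Boolean product of two 0/1 matrices."""
    Bb, Cb = B.astype(bool), C.astype(bool)
    return (Bb[:, :, None] & Cb[None, :, :]).any(axis=1).astype(int)

# Toy example: a 3x3 binary matrix expressed with two binary factors
B = np.array([[1, 0], [1, 1], [0, 1]])
C = np.array([[1, 1, 0], [0, 1, 1]])
print(boolean_matmul(B, C))
```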

Relevance:

30.00%

Publisher:

Abstract:

A vast number of public services and goods are contracted through procurement auctions. It is therefore very important to design these auctions in an optimal way. Typically, we are interested in two different objectives. The first objective is efficiency. Efficiency means that the contract is awarded to the bidder that values it the most, which in the procurement setting means the bidder that has the lowest cost of providing a service of a given quality. The second objective is to maximize public revenue. Maximizing public revenue means minimizing the costs of procurement. Both of these goals are important from the welfare point of view. In this thesis, I analyze field data from procurement auctions and show how empirical analysis can be used to help design the auctions to maximize public revenue. In particular, I concentrate on how competition, that is, the number of bidders, should be taken into account in the design of auctions. In the first chapter, the main policy question is whether the auctioneer should spend resources to induce more competition. The information paradigm is essential in analyzing the effects of competition. We speak of a private values information paradigm when the bidders know their valuations exactly. In a common values information paradigm, the information about the value of the object is dispersed among the bidders. With private values more competition always increases the public revenue, but with common values the effect of competition is uncertain. I study the effects of competition in the City of Helsinki bus transit market by conducting tests for common values. I also extend an existing test by allowing for bidder asymmetry. The information paradigm seems to be that of common values. The bus companies that have garages close to the contracted routes are influenced more by the common value elements than those whose garages are further away. Therefore, attracting more bidders does not necessarily lower procurement costs, and thus the City should not implement costly policies to induce more competition. In the second chapter, I ask how the auctioneer can increase its revenue by changing contract characteristics such as contract sizes and durations. I find that the City of Helsinki should shorten the contract duration in the bus transit auctions, because that would decrease the importance of common value components and cheaply increase entry, which would now have a more beneficial impact on the public revenue. Typically, cartels decrease the public revenue in a significant way. In the third chapter, I propose a new statistical method for detecting collusion and compare it with an existing test. I argue that my test is robust to unobserved heterogeneity, unlike the existing test. I apply both methods to procurement auctions for snow removal at schools in Helsinki. According to these tests, the bidding behavior of two of the bidders seems consistent with a contract allocation scheme.

Relevance:

30.00%

Publisher:

Abstract:

Semi-natural grasslands are the most important agricultural areas for biodiversity. The present study investigates the effects of traditional livestock grazing and mowing on plant species richness, the main emphasis being on cattle grazing in mesic semi-natural grasslands. The two reviews provide a thorough assessment of the multifaceted impacts and importance of grazing and mowing management for plant species richness. It is emphasized that livestock grazing and mowing have partially compensated for the suppression of major natural disturbances by humans and mitigated the negative effects of eutrophication. This hypothesis has important consequences for nature conservation: a large proportion of European species originally adapted to natural disturbances may at present be dependent on livestock grazing and/or mowing. Furthermore, grazing and mowing are key management methods for mitigating the effects of nutrient enrichment. The species composition and richness in old (continuously grazed), new (grazing restarted 3-8 years ago) and abandoned (over 10 years) pastures differed consistently across a range of spatial scales, and were intermediate in new pastures compared to old and abandoned pastures. In mesic grasslands most plant species were shown to benefit from cattle grazing. Indicator species of biologically valuable grasslands and rare species were more abundant in grazed than in abandoned grasslands. Steep S-SW-facing slopes are the most suitable sites for many grassland plants and should be prioritized in grassland restoration. The proportion of species trait groups benefiting from grazing was higher in mesic semi-natural grasslands than in dry and wet grasslands. Consequently, species trait responses to grazing and the effectiveness of the natural factors limiting plant growth may be intimately linked. The high plant species richness of traditionally mowed and grazed areas is explained by numerous factors which operate on different spatial scales. Particularly important for maintaining large-scale plant species richness are evolutionary and mitigation factors. Grazing and mowing cause a shift towards the conditions that occurred during the evolutionary history of European plant species by modifying key ecological factors (nutrients, pH and light). The results of this dissertation suggest that restoration of semi-natural grasslands by private farmers is potentially a useful method for managing biodiversity in the agricultural landscape. However, the quality of management is commonly inadequate, particularly due to financial constraints. For enhanced success of restoration, management regulations in the agri-environment scheme need to be defined more explicitly and the scheme should be revised to encourage the management of biodiversity.

Relevance:

30.00%

Publisher:

Abstract:

Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere with time can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and the resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km to 10 km. Recently, the aim has been to reach operational scales of 1-4 km. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For the clear-sky longwave radiation parameterization, schemes used in NWP models provide much better results than simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive in producing fairly accurate surface fluxes. Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent for both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the reason for the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is an inertial oscillation mechanism when the large-scale flow is from the south-east or the west. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment with a 7.7 km grid size is able to generate a similar LLJ flow structure as suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is very important, especially if the inner meso-scale model domain is small.
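
The inertial oscillation mechanism invoked for the low-level jet has a simple closed form: once friction is removed, the ageostrophic wind vector rotates at the Coriolis frequency, so the wind speed periodically overshoots the geostrophic value. The sketch below evaluates that analytic solution; the Coriolis parameter and wind values are illustrative, not taken from the thesis experiments.

```python
import numpy as np

# Minimal sketch of an inertial oscillation: after frictional decoupling the
# ageostrophic wind rotates at the Coriolis frequency, giving a super-geostrophic
# wind maximum (a low-level jet). All numerical values are illustrative.

f = 1.2e-4            # Coriolis parameter near 60N (1/s)
ug, vg = 8.0, 0.0     # geostrophic wind (m/s), assumed constant
u0, v0 = 4.0, 0.0     # wind at the moment friction switches off (m/s)

t = np.linspace(0.0, 12 * 3600.0, 13)     # 12 hours, hourly output
du, dv = u0 - ug, v0 - vg                 # initial ageostrophic wind
u = ug + du * np.cos(f * t) + dv * np.sin(f * t)
v = vg - du * np.sin(f * t) + dv * np.cos(f * t)
print(np.hypot(u, v).round(1))            # wind speed peaks above |Vg| = 8 m/s
```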

Relevance:

30.00%

Publisher:

Abstract:

Governance has been one of the most popular buzzwords in recent political science. As with any term shared by numerous fields of research, as well as everyday language, governance is encumbered by a jungle of definitions and applications. This work elaborates on the concept of network governance. Network governance refers to complex policy-making situations where a variety of public and private actors collaborate in order to produce and define policy. Governance consists of processes in which autonomous, self-organizing networks of organizations exchange information and deliberate. Network governance is a theoretical concept that corresponds to an empirical phenomenon. Often, this phenomenon is given a historical reading: governance is used to describe changes in the political processes of Western societies since the 1980s. In this work, empirical governance networks are used as an organizing framework, and the concepts of autonomy, self-organization and network structure are developed as tools for the empirical analysis of any complex decision-making process. This work develops this framework and explores the governance networks in the case of environmental policy-making in the City of Helsinki, Finland. The crafting of a local ecological sustainability programme required support and knowledge from all sectors of administration, a number of entrepreneurs and companies, and the inhabitants of Helsinki. The policy process relied explicitly on networking, with public and private actors collaborating to design policy instruments. Communication between individual organizations led to the development of network structures and patterns. This research analyses these patterns and their effects on policy choice by applying the methods of social network analysis. A variety of social network analysis methods are used to uncover different features of the networked process. Links between individual network positions, network subgroup structures and macro-level network patterns are compared to the types of organizations involved and the final policy instruments chosen. By using governance concepts to depict a policy process, the work aims to assess whether they contribute to models of policy-making. The conclusion is that the governance literature sheds light on events that would otherwise go unnoticed, or whose conceptualization would remain atheoretical. The framework of network governance should be in the toolkit of the policy analyst.
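
For readers unfamiliar with social network analysis, the sketch below shows the kind of measures involved at the three levels mentioned above: centrality for individual positions, cohesive subgroups for meso-level structure, and overall graph structure. The toy collaboration network and its node names are invented, not the Helsinki data.

```python
import networkx as nx
from networkx.algorithms import community

# Minimal social network analysis sketch on an invented collaboration graph:
# node-level centrality, brokerage (betweenness) and subgroup detection.

G = nx.Graph()
G.add_edges_from([
    ("env_office", "planning"), ("env_office", "energy_co"),
    ("planning", "residents"), ("energy_co", "business_assoc"),
    ("business_assoc", "residents"), ("env_office", "residents"),
])

print(nx.degree_centrality(G))                              # individual positions
print(nx.betweenness_centrality(G))                         # brokerage positions
print(list(community.greedy_modularity_communities(G)))     # subgroup structure
```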

Relevance:

30.00%

Publisher:

Abstract:

Bangladesh, often better known to the outside world as a country of natural calamities, is one of the most densely populated countries in the world. Despite rapid urbanization, more than 75% of the people still live in rural areas. The density of the rural population is also one of the highest in the world. Being a poor, low-income country, its main challenge is to eradicate poverty through increasing equitable income. Since its independence in 1971, Bangladesh has experienced many ups and downs, but over the past three decades its gross domestic product (GDP) has grown at an impressive rate. Consequently, the country's economy is developing and the country has outperformed many low-income countries in terms of several social indicators. Bangladesh has achieved the Millennium Development Goal (MDG) of eliminating gender disparity in primary and secondary school enrollment. A sharp decline in child and infant mortality rates, increased per capita income, and improved food security have placed Bangladesh on track to achieve the status of a middle-income country in the near future. All these developments have influenced the consumption pattern of the country. This study explores the consumption scenario of rural Bangladesh, its changing consumption patterns, the relationship between technology and consumption in rural Bangladesh, cultural consumption in rural Bangladesh, and the myriad reasons why consumers nevertheless feel compelled to consume chemically treated foods. Data were collected in two phases in the summers of 2006 and 2008. In 2006, the empirical data were collected from the following three sources: interviews with consumers, producers/sellers, and doctors and pharmacists; observations of sellers/producers; and reviews of articles published in the national English and Bengali (the national language of Bangladesh) daily newspapers. A total of 110 consumers, 25 sellers/producers, 7 doctors, and 7 pharmacists were interviewed and observed. In 2008, data were collected through semi-structured in-depth qualitative interviews, ethnography, and unstructured conversations substantiated by secondary sources and photographs; the total number of persons interviewed was 22. Data were also collected on the consumption of food, clothing, housing, education, medical facilities, marriage and dowry, the division of labor, household decision making, different festivals such as Eid (for Muslims), the Bengali New Year, and Durga puja (for Hindus), and leisure. Qualitative methods were applied to the data analysis and were supported by secondary quantitative data. The findings of this study suggest that the consumption patterns of rural Bangladeshis are changing over time along with economic and social development, and that technology has rendered aspects of daily life more convenient. This study identified the perceptions and experiences of rural people regarding the technologies in use and explored how culture is associated with consumption. This study identified the reasons behind the use of hazardous chemicals (e.g. calcium carbide, sodium cyclamate, cyanide and formalin) in foods as well as the extent to which food producers/sellers used such chemicals. In addition, this study assessed consumer perceptions of and attitudes toward these contaminated food items and explored how adulterated foods and foodstuffs affect consumer health. This study also showed that consumers were aware that various foods and foodstuffs contained hazardous chemicals, and that these adulterated foods and foodstuffs were harmful to their health.

Relevance:

30.00%

Publisher:

Abstract:

This paper uses the Value-at-Risk (VaR) approach to define the risk in both long and short trading positions. The investigation is done on some major market indices (Japanese, UK, German and US). The performance of models that take into account skewness and fat tails is compared to that of symmetric models, in relation both to the specific model for estimating the variance and to the distribution of the variance estimate used as input in the VaR estimation. The results indicate that more flexible models do not necessarily perform better in predicting the VaR forecast; the reason for this is most probably the complexity of these models. A general result is that different methods for estimating the variance are needed for different confidence levels of the VaR, and for the different indices. Also, different models should be used for the left and the right tail of the distribution, respectively.
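
The long/short distinction above comes down to which tail of the return distribution is used as the loss quantile. The sketch below shows one-day parametric VaR for both positions; the fixed volatility and the Student-t innovation distribution are illustrative assumptions, whereas in the paper the variance would come from the competing variance models.

```python
from scipy import stats

# Minimal sketch of one-day parametric VaR for long and short positions.
# The long position is exposed to the left tail, the short position to the
# right tail. Fixed volatility and Student-t innovations are assumed here
# purely for illustration.

def var_long_short(sigma, alpha=0.01, df=6, mu=0.0):
    """Return (VaR_long, VaR_short) as positive loss quantiles."""
    q_left = stats.t.ppf(alpha, df)          # left-tail quantile (long position)
    q_right = stats.t.ppf(1.0 - alpha, df)   # right-tail quantile (short position)
    return -(mu + sigma * q_left), (mu + sigma * q_right)

print(var_long_short(sigma=0.012))           # daily volatility of 1.2 %
```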

Relevance:

30.00%

Publisher:

Abstract:

This thesis studies the effect of income inequality on economic growth. This is done by analyzing panel data from several countries, with both short and long time dimensions of the data. Two of the chapters study the direct effect of inequality on growth, and one chapter also looks at the possible indirect effect of inequality on growth by assessing the effect of inequality on savings. In Chapter two, the effect of inequality on growth is studied by using a panel of 70 countries and the new EHII2008 inequality measure. The chapter addresses two problems that panel econometric studies on the economic effects of inequality have recently encountered: the comparability problem associated with the commonly used Deininger and Squire's Gini index, and the problem relating to the estimation of group-related elasticities in panel data. In this study, a simple way to 'bypass' the vagueness related to the use of parametric methods to estimate group-related parameters is presented. The idea is to estimate the group-related elasticities implicitly using a set of group-related instrumental variables. The estimation results with the new data and method indicate that the relationship between income inequality and growth is likely to be non-linear. Chapter three uses the EHII2.1 inequality measure and a panel with annual time series observations from 38 countries to test the existence of long-run equilibrium relation(s) between inequality and the level of GDP. Panel unit root tests indicate that both the logarithmic EHII2.1 inequality measure and the logarithmic GDP per capita series are I(1) nonstationary processes. They are also found to be cointegrated of order one, which implies that there is a long-run equilibrium relation between them. The long-run growth elasticity of inequality is found to be negative in the middle-income and rich economies, but the results for poor economies are inconclusive. In the fourth chapter, macroeconomic data on nine developed economies spanning four decades starting from 1960 are used to study the effect of changes in the top income share on national and private savings. The income share of the top 1% of the population is used as a proxy for the distribution of income. The effect of inequality on private savings is found to be positive in the Nordic and Central European countries, but for the Anglo-Saxon countries the direction of the effect (positive vs. negative) remains somewhat ambiguous. Inequality is found to have an effect on national savings only in the Nordic countries, where it is positive.
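
The unit-root and cointegration logic described above can be illustrated country by country with standard single-series tests; the thesis itself uses panel versions of these tests, so the sketch below is only a simplified analogue, and the two series are simulated I(1) placeholders rather than EHII or GDP data.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, coint

# Minimal country-level sketch of the unit-root / cointegration reasoning.
# Both series are simulated placeholders; the thesis uses panel tests on
# the EHII inequality measure and log GDP per capita.

rng = np.random.default_rng(0)
log_gdp = np.cumsum(rng.normal(0.02, 0.03, 160))        # random walk with drift, I(1)
log_ineq = 0.5 * log_gdp + rng.normal(0.0, 0.02, 160)    # cointegrated with log_gdp

# Augmented Dickey-Fuller: failing to reject a unit root suggests an I(1) series
print("ADF p-value, log GDP:", adfuller(log_gdp)[1])

# Engle-Granger test: a small p-value indicates a long-run equilibrium relation
print("cointegration p-value:", coint(log_ineq, log_gdp)[1])
```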