36 results for A1B scenario
in Helda - Digital Repository of University of Helsinki
Abstract:
The report assesses the impact of climate change on the wintertime freezing of the ground in Finland on the basis of temperature sums. The calculations describe the frost depth specifically in snow-free areas, for example on roads from which the fallen snow is ploughed away. In nature, under a heat-insulating snow cover, the frost layer is thinner than in such snow-free areas. On the other hand, in a natural environment local differences are accentuated owing to, among other things, soil types and vegetation. Frost depths were first calculated for the climate conditions of the baseline period 1971–2000 on the basis of wintertime temperatures derived from weather observations. The calculations were then repeated for three future periods (2010–2039, 2040–2069 and 2070–2099) by raising the temperatures in the manner projected by climate change models. The calculations were based on the mean temperature change simulated by the A1B scenario runs of 19 climate models. To assess the sensitivity of the results, some calculations were also made using clearly weaker and stronger warming estimates. If the temperature rise under the A1B scenario materialises in line with the current model results, the frost layer will thin over the course of a hundred years by 30–40 % in northern Finland and by 50–70 % in large parts of central and southern Finland. Already in the coming decades, frost is projected to thin by 10–30 %, and by more in the archipelago. If warming were to follow the strongest alternative considered, the frost depth would decrease even more. The interannual variability of frost depth and its future changes were also assessed. In mild winters the frost layer thins more than in normal or severe winters. However, the data produced by the weather generator used to simulate daily weather variability contained too few very low and very high temperatures. Therefore, the frost depths calculated from these temperature data apparently also vary too little from year to year. Thaw-weakening (kelirikko) conditions can also occur in the middle of the frost season if a thaw lasting several days, combined with heavy rain, manages to melt the ground. Such weather situations occurring during the frost season appear to become more common in the coming decades. Towards the end of the century, however, they will again become less frequent in southern Finland, because the frost season will shorten substantially. In addition to climate change projections for the coming decades, ground frost and the occurrence of thaw weakening can in principle also be predicted using short-term weather forecasts. Long weather forecasts, covering weeks or months, are admittedly not yet particularly reliable, but even shorter forecasts could be useful, for example, in planning road maintenance.
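As a rough illustration of the calculation principle described in this abstract, the sketch below computes a freezing index (the sum of freezing degree-days) from daily mean temperatures and converts it to a frost depth with a Stefan-type square-root relation, then repeats the calculation with a uniform warming added, as in a scenario run. The coefficient, the synthetic winter temperatures and the warming increment are illustrative placeholders, not values from the report.

```python
import numpy as np

def freezing_index(daily_mean_temp_c):
    """Sum of degree-days below 0 degC over the winter (freezing index, degC*d)."""
    t = np.asarray(daily_mean_temp_c, dtype=float)
    return float(np.sum(np.clip(-t, 0.0, None)))

def frost_depth_cm(fi_degc_days, coeff_cm=3.0):
    """Stefan-type relation: frost depth grows with the square root of the freezing
    index. The coefficient depends on soil type, moisture and surface conditions
    (here an illustrative value for a snow-free surface)."""
    return coeff_cm * np.sqrt(fi_degc_days)

# Hypothetical winter of daily mean temperatures (degC) for the baseline climate.
rng = np.random.default_rng(0)
baseline_winter = rng.normal(loc=-6.0, scale=5.0, size=150)

# Repeat the calculation with a uniform warming added, mimicking a scenario period.
warming_degc = 3.0                      # illustrative scenario increment
scenario_winter = baseline_winter + warming_degc

for label, temps in [("baseline", baseline_winter), ("scenario", scenario_winter)]:
    fi = freezing_index(temps)
    print(f"{label}: freezing index {fi:.0f} degC*d, frost depth {frost_depth_cm(fi):.0f} cm")
```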
Abstract:
Agriculture is an economic activity that relies heavily on the availability of natural resources. Through its role in food production, agriculture is a major factor affecting public welfare and health, and its indirect contribution to gross domestic product and employment is significant. Agriculture also contributes to numerous ecosystem services through the management of rural areas. However, the environmental impact of agriculture is considerable and reaches far beyond agroecosystems. The questions related to farming for food production are thus manifold and of great public concern. Improving the environmental performance of agriculture and the sustainability of food production (sustainabilizing food production) calls for the application of a wide range of expert knowledge. This study falls within the field of agro-ecology, with interfaces to food systems and sustainability research, and exploits methods typical of industrial ecology. Research in these fields extends from multidisciplinary to interdisciplinary and transdisciplinary, a holistic approach being the key tenet. The methods of industrial ecology have been applied extensively to explore the interaction between human economic activity and resource use. Specifically, the material flow approach (MFA) has established its position through the application of systematic environmental and economic accounting statistics. However, very few studies have applied MFA specifically to agriculture. The MFA approach was used in this thesis in such a context in Finland. The focus of this study is the ecological sustainability of primary production. The aim was to explore the possibilities of assessing the ecological sustainability of agriculture using two different approaches. In the first approach, MFA methods from industrial ecology were applied to agriculture, whereas the second is based on food consumption scenarios. The two approaches were used in order to capture some of the impacts of dietary changes and of changes in production mode on the environment. The methods were applied at levels ranging from national to sector and local. Through the supply-demand approach, the viewpoint shifted from that of food production to that of food consumption. The main data sources were official statistics complemented with published research results and expert appraisals. The MFA approach was used to define the system boundaries, to quantify the material flows and to construct eco-efficiency indicators for agriculture. The results were further elaborated into an input-output model that was used to analyse the food flux in Finland and to determine its relationship to economy-wide physical and monetary flows. The methods based on food consumption scenarios were applied at the regional and local levels to assess the feasibility and environmental impacts of relocalising food production. The approach was also used for the quantification and source allocation of greenhouse gas (GHG) emissions of primary production. The GHG assessment thus provided a means of cross-checking the results obtained with the two different approaches. MFA data, as such or expressed as eco-efficiency indicators, are useful in describing the overall development. However, the data are not sufficiently detailed for identifying the hot spots of environmental sustainability.
Eco-efficiency indicators should not be used bluntly in environmental assessment: the carrying capacity of nature, the potential exhaustion of non-renewable natural resources and the possible rebound effect also need to be accounted for when striving towards improved eco-efficiency. The input-output model is suitable for nationwide economy analyses, and it shows the distribution of monetary and material flows among the various sectors. Environmental impact can be captured only at a very general level in terms of total material requirement, gaseous emissions, energy consumption and agricultural land use. Improving the environmental performance of food production requires more detailed and more local information. The approach based on food consumption scenarios can be applied at regional or local scales. Based on various diet options, the method accounts for the feasibility of re-localising food production and the environmental impacts of such re-localisation in terms of nutrient balances, gaseous emissions, agricultural energy consumption, agricultural land use and diversity of crop cultivation. The approach is applicable anywhere, but the calculation parameters need to be adjusted to comply with the specific circumstances. The food consumption scenario approach thus pays attention to the variability of production circumstances and may provide environmental information that is locally relevant. The approaches based on the input-output model and on food consumption scenarios represent small steps towards more holistic systemic thinking. However, neither one alone nor the two together provide sufficient information for sustainabilizing food production. The environmental performance of food production should be assessed together with the other criteria of sustainable food provisioning. This requires the evaluation and integration of research results from many different disciplines in the context of a specified geographic area. A foodshed area that comprises both the rural hinterlands of food production and the population centres of food consumption is suggested as a suitable areal extent for such research. Finding a balance between the various aspects of sustainability is a matter of optimal trade-offs. The balance cannot be universally determined; the assessment methods and the actual measures depend on what the bottlenecks of sustainability are in the area concerned. These have to be agreed upon among the actors of the area.
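The input-output calculation referred to above can be illustrated with the standard Leontief quantity model, x = (I - A)^(-1) y, where A holds the inter-sector input coefficients and y the final demand; sector-level material intensities then translate the monetary flows into material requirements. The three-sector coefficients and intensities below are hypothetical, not the thesis data; this is only a sketch of the mechanics.

```python
import numpy as np

# Hypothetical technical coefficient matrix A (inputs per unit of output)
# for three aggregated sectors: agriculture, food industry, other economy.
A = np.array([
    [0.10, 0.30, 0.01],   # agricultural inputs used by each sector
    [0.05, 0.10, 0.03],   # food-industry inputs
    [0.20, 0.25, 0.20],   # inputs from the rest of the economy
])
final_demand = np.array([2.0, 10.0, 50.0])       # billion EUR, hypothetical

# Leontief quantity model: total output x satisfies x = A x + y  =>  x = (I - A)^(-1) y
x = np.linalg.solve(np.eye(3) - A, final_demand)

# Hypothetical direct material intensities (tonnes of material per 1000 EUR of output).
material_intensity = np.array([2.5, 0.8, 0.4])
total_material_requirement = material_intensity * x * 1e6   # tonnes (x is in billion EUR)

for name, out, tmr in zip(["agriculture", "food industry", "other"], x, total_material_requirement):
    print(f"{name:13s} output {out:6.1f} bEUR, material requirement {tmr:,.0f} t")
```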
Abstract:
The objective of this paper is to improve option risk monitoring by examining the information content of implied volatility and by introducing the calculation of a single-sum expected risk exposure similar to Value-at-Risk. The figure is calculated in two steps. First, the value of a portfolio of options is estimated for a number of different market scenarios; second, the information content of the estimated scenarios is summarized into a single-sum risk measure. This involves the use of probability theory and return distributions, which confronts the user with the problem of non-normality in the return distribution of the underlying asset. Here the hyperbolic distribution is used as one alternative for dealing with heavy tails. Results indicate that the information content of implied volatility is useful when predicting future large returns in the underlying asset. Further, the hyperbolic distribution provides a good fit to historical returns, enabling a more accurate definition of statistical intervals and extreme events.
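A minimal sketch of the two-step procedure described above, under stated assumptions: market scenarios for the underlying are generated from a heavy-tailed return model (a Student-t is used here as a stand-in for the hyperbolic distribution discussed in the paper), the option position is revalued in each scenario with Black-Scholes, and the scenario P&L distribution is summarized into a single quantile-based figure. All parameter values are illustrative.

```python
import math
import numpy as np

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(spot, strike, vol, rate, ttm):
    """Black-Scholes value of a European call."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol**2) * ttm) / (vol * math.sqrt(ttm))
    d2 = d1 - vol * math.sqrt(ttm)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * ttm) * norm_cdf(d2)

# Illustrative portfolio: long 100 calls on an index.
spot, strike, vol, rate, ttm = 100.0, 105.0, 0.20, 0.02, 0.25
position = 100
current_value = position * bs_call(spot, strike, vol, rate, ttm)

# Step 1: revalue the portfolio over simulated one-day scenarios for the underlying.
rng = np.random.default_rng(1)
horizon = 1.0 / 252.0
returns = rng.standard_t(df=4, size=20_000) * 0.01        # heavy-tailed daily returns (stand-in)
scenario_values = np.array([
    position * bs_call(spot * math.exp(r), strike, vol, rate, ttm - horizon) for r in returns
])

# Step 2: summarize the scenario P&L into a single-sum risk figure (1% quantile).
pnl = scenario_values - current_value
risk_figure = -np.quantile(pnl, 0.01)
print(f"current value {current_value:.0f}, 99% scenario risk figure {risk_figure:.0f}")
```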
Abstract:
The objective of this paper is to suggest a method that accounts for the impact of volatility smile dynamics when performing scenario analysis for a portfolio consisting of vanilla options. As the volatility smile is documented to change at least with the level of implied at-the-money volatility, a suitable model is included in the calculation of the simulated market scenarios. By constructing simple portfolios of index options and comparing the ex ante risk exposure, measured using different pricing methods, to realized market values ex post, the improvement from incorporating the model is monitored. The examples analyzed in the study generate results that statistically support the conclusion that the most accurate scenarios are those calculated using the model accounting for the dynamics of the smile. Thus, we show that the differences emanating from the volatility smile are apparent and should be accounted for, and that the methodology presented herein is one suitable alternative for doing so.
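To illustrate the idea, the sketch below reprices a vanilla call under a spot/volatility scenario in two ways: once holding the option's current implied volatility fixed, and once letting it move with the ATM level through a simple linear smile model sigma(K) = sigma_ATM + skew * ln(K/S). The smile specification and all parameter values are assumptions made for this example, not the model used in the paper.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(spot, strike, vol, rate, ttm):
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol**2) * ttm) / (vol * math.sqrt(ttm))
    d2 = d1 - vol * math.sqrt(ttm)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * ttm) * norm_cdf(d2)

def smile_vol(atm_vol, spot, strike, skew=-0.3):
    """Toy smile: implied vol rises for low strikes, falls for high strikes."""
    return max(atm_vol + skew * math.log(strike / spot), 0.01)

spot, strike, rate, ttm = 100.0, 90.0, 0.02, 0.5
atm_vol = 0.20

# Market scenario: the index falls 5% and ATM implied volatility jumps to 30%.
scen_spot, scen_atm_vol = spot * 0.95, 0.30

# (a) naive scenario value: keep the option's current implied vol unchanged
naive = bs_call(scen_spot, strike, smile_vol(atm_vol, spot, strike), rate, ttm)

# (b) smile-aware scenario value: re-read the implied vol from the shifted smile
smile_aware = bs_call(scen_spot, strike, smile_vol(scen_atm_vol, scen_spot, strike), rate, ttm)

print(f"scenario value, fixed vol:   {naive:.2f}")
print(f"scenario value, smile model: {smile_aware:.2f}")
```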
Abstract:
The study focuses on the potential role of the brick-making industries in Sudan in deforestation and greenhouse gas emissions due to the consumption of biofuels. The results were based on the observation of 25 brick-making industries from three administrative regions of Sudan, namely Khartoum, Kassala and Gezira. The methodological approach followed the procedures outlined by the Intergovernmental Panel on Climate Change (IPCC). To represent a serious deforestation scenario, it was also assumed that all wood used for this particular purpose comes from unsustainable sources. The study revealed that the total annual quantity of fuelwood consumed by the surveyed brick-making industries (25) was 2,381 t dm. Accordingly, the observed total potential deforested wood was 10,624 m3, of which the total deforested round wood was 3,664 m3 and deforested branches 6,961 m3. The study observed that a total of 2,990 t of biomass fuels (fuelwood and dung cake) were consumed annually by the surveyed brick-making industries for brick burning. Consequently, the estimated total annual emissions of greenhouse gases were 4,832 t CO2, 21 t CH4, 184 t CO, 0.15 t N2O, 5 t NOx and 3.5 t NO, while the total carbon released into the atmosphere was 1,318 t. Altogether, the total annual greenhouse gas emissions from biomass fuel burning were 5,046 t, of which 4,104 t came from fuelwood and 943 t from dung cake burning. According to the results, due to the consumption of fuelwood in the brick-making industries (3,450 units) of Sudan, the amount of wood lost annually from the total growing stock of wood in forests and trees in Sudan would be 1,466,000 m3, encompassing 505,000 m3 of round wood and 961,000 m3 of branches. Considering all categories of biofuels (fuelwood and dung cake), it was estimated that the total emissions from all the brick-making industries of Sudan would be 663,000 t CO2, 2,900 t CH4, 25,300 t CO, 20 t N2O, 720 t NOx and 470 t NO per annum, while the total carbon released into the atmosphere would be 181,000 t annually.
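The emission arithmetic follows the general IPCC pattern: carbon released = fuel mass (dry matter) x carbon fraction x fraction oxidized, CO2 = carbon x 44/12, and the carbon-based trace gases are obtained from emission ratios relative to the carbon released. The sketch below reproduces this chain with placeholder factors; the actual IPCC default values should be taken from the guidelines rather than from this example, and the dung-cake tonnage is inferred from the abstract.

```python
# Illustrative placeholder factors; actual calculations should use the IPCC default
# (or country-specific) values for each fuel and gas.
CARBON_FRACTION = {"fuelwood": 0.50, "dung_cake": 0.40}   # t C per t dry matter (placeholders)
FRACTION_OXIDIZED = 0.90                                   # placeholder combustion efficiency
EMISSION_RATIO_TO_C = {"CH4": 0.012, "CO": 0.06}           # t gas per t C released (placeholders)
# N2O, NOx and NO would follow an analogous chain based on the nitrogen content of the fuel.

def biomass_emissions(fuel_t_dm):
    """Very simplified IPCC-style accounting of emissions from biomass burning."""
    carbon_released = sum(m * CARBON_FRACTION[f] * FRACTION_OXIDIZED
                          for f, m in fuel_t_dm.items())
    emissions = {"C released": carbon_released,
                 "CO2": carbon_released * 44.0 / 12.0}
    for gas, ratio in EMISSION_RATIO_TO_C.items():
        emissions[gas] = carbon_released * ratio
    return emissions

# Annual fuel use of the surveyed industries (t dry matter); the dung-cake share is
# inferred from the abstract as 2,990 - 2,381 t.
fuel_use = {"fuelwood": 2381.0, "dung_cake": 609.0}
for gas, amount in biomass_emissions(fuel_use).items():
    print(f"{gas:10s}: {amount:8.1f} t/yr")
```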
Abstract:
The potato virus A (PVA) genome-linked protein (VPg) is a multifunctional protein that takes part in vital infection cycle events such as replication and movement of the virus from cell to cell. VPg is attached to the 5´ end of the genome and is carried in the tip structure of the filamentous virus particle. VPg is also the last protein to be cleaved from the polyprotein. VPg interacts with several viral and host proteins and is phosphorylated at several positions. These features indicate a central role in virus epidemiology and a requirement for an efficient but flexible mechanism for switching between different functions. This study examines some of the key VPg functions in more detail. Mutations in the positively charged region from Ala38 to Lys44 affected the NTP binding, uridylylation, and in vitro translation inhibition activities of VPg, whereas in vivo translation inhibition was not affected. Some of the data generated in this study implicated the structural flexibility of the protein in its functional activities. VPg lacks a rigid structure, which could allow it to adapt conformationally to different functions as needed. A major finding of this study is that PVA VPg belongs to the class of 'intrinsically disordered proteins' (IDPs). IDPs are a novel protein class that has helped to explain the observed lack of structure. The existence of IDPs clearly shows that proteins can be functional and adopt a native fold without a rigid structure. Evidence for the intrinsic disorder of VPg was provided by CD spectroscopy, NMR, fluorescence spectroscopy, bioinformatic analysis, and limited proteolytic digestion. The structure of VPg resembles that of a molten globule-type protein and has a hydrophobic core domain. Approximately 50% of the protein is disordered, and an α-helical stabilization of these regions has been hypothesized. Surprisingly, the VPg structure was stabilized in the presence of anionic lipid vesicles. The stabilization was accompanied by a change in VPg structure and major morphological modifications of the vesicles, including a pronounced increase in their size and the appearance of pore- or plaque-like formations on the vesicle surface. The most likely scenario seems to be an α-helical stabilization of VPg, which induces the formation of a pore- or channel-like structure on the vesicle surface. The size increase is probably due to fusion or swelling of the vesicles. The latter hypothesis is supported by the evident disruption of the vesicles after prolonged incubation with VPg. A model describing the results is presented and discussed in relation to other known properties of the protein.
Abstract:
Eutrophication of the Baltic Sea is a serious problem. This thesis estimates the benefit to Finns from reduced eutrophication in the Gulf of Finland, the most eutrophied part of the Baltic Sea, by applying the choice experiment method, which belongs to the family of stated preference methods. Because stated preference methods have been subject to criticism, e.g. due to their hypothetical survey context, this thesis contributes to the discussion by studying two anomalies that may lead to biased welfare estimates: respondent uncertainty and preference discontinuity. The former refers to the difficulty of stating one's preferences for an environmental good in a hypothetical context. The latter implies a departure from the continuity assumption of conventional consumer theory, which forms the basis for the method and the analysis. In the three essays of the thesis, discrete choice data are analyzed with multinomial logit and mixed logit models. On average, Finns are willing to contribute to the water quality improvement. The probability of willingness increases with residential or recreational contact with the gulf, higher than average income, younger than average age, and the absence of dependent children in the household. On average, the relatively most important water quality characteristic for Finns is water clarity, followed by fewer occurrences of blue-green algae. For the future nutrient reduction scenarios, the annual mean household willingness-to-pay estimates range from 271 to 448 euros, and the aggregate welfare estimates for Finns range from 28 billion to 54 billion euros, depending on the model and the intensity of the reduction. Of the respondents (N=726), 72.1% state in a follow-up question that they are either Certain or Quite certain about their answer when choosing the preferred alternative in the experiment. Based on the analysis of other follow-up questions and another sample (N=307), 10.4% of the respondents are identified as potentially having discontinuous preferences. In relation to both anomalies, respondent- and questionnaire-specific variables are found among the underlying causes, and a departure from standard analysis may improve the model fit and the efficiency of the estimates, depending on the chosen modeling approach. Introducing uncertainty about the future state of the Gulf increases the acceptance of the valuation scenario, which may indicate an increased credibility of the proposed scenario. In conclusion, modeling preference heterogeneity is an essential part of the analysis of discrete choice data. The results regarding uncertainty in stating one's preferences and non-standard choice behavior are promising: accounting for these anomalies in the analysis may improve the precision of the estimates of the benefit from reduced eutrophication in the Gulf of Finland.
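In a choice experiment analysed with a multinomial or conditional logit model, marginal willingness to pay for an attribute is commonly computed as the negative ratio of the attribute coefficient to the cost coefficient, and an aggregate welfare figure follows by scaling to the population. The coefficients and population count below are hypothetical and serve only to show the arithmetic, not to reproduce the thesis estimates.

```python
# Hypothetical estimated coefficients from a multinomial logit choice model.
beta = {
    "water_clarity": 0.85,
    "less_bluegreen_algae": 0.55,
    "cost": -0.012,            # per euro of annual household payment
}

# Marginal willingness to pay (euros/household/year): WTP_k = -beta_k / beta_cost.
wtp = {attr: -b / beta["cost"] for attr, b in beta.items() if attr != "cost"}

# Aggregate annual welfare estimate: scale mean household WTP to the number of households.
n_households = 2_600_000       # hypothetical number of Finnish households
for attr, value in wtp.items():
    print(f"{attr:22s} WTP {value:6.1f} EUR/household/yr, "
          f"aggregate {value * n_households / 1e6:7.1f} MEUR/yr")
```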
Abstract:
Climate change is the single biggest environmental problem in the world at the moment. Although the effects are still not fully understood and there is a considerable amount of uncertainty, many nations have decided to mitigate the change. On the societal level, a planner who tries to find an economically optimal solution to an environmental pollution problem seeks to reduce pollution from the sources where reductions are most cost-effective. This study aims to find out how effective the instruments of agricultural policy are in the case of climate change mitigation in Finland. The theoretical base of this study is the neoclassical economic theory that is based on the assumption of a rational economic agent who maximizes his own utility. This theoretical base has been widened in the direction clearly essential to the matter: the theory of environmental economics. Deeply relevant to this problem, and central in the theory of environmental economics, are the concepts of externalities and public goods. Also relevant are the problems of global pollution and non-point-source pollution. Econometric modelling was the method applied in this study. The Finnish part of the AGMEMOD model, which covers the whole EU, was used for estimating the development of pollution. This model is a seemingly recursive, partially dynamic partial-equilibrium model that was constructed to predict the development of Finnish agricultural production of the most important products. For the study, I personally updated the model and also widened its scope in some relevant respects. I also devised a table that calculates the emissions of greenhouse gases according to the rules set by the IPCC. With the model I investigated five alternative scenarios in comparison to the baseline scenario of Agenda 2000 agricultural policy. The alternative scenarios were: 1) the CAP reform of 2003, 2) free trade in agricultural commodities, 3) technological change, 4) banning the cultivation of organic soils, and 5) the combination of the last three scenarios as the maximal achievable reduction. The maximal achievement in alternative scenario 5 was 1/3 of the level achieved in the baseline scenario. The CAP reform caused only a minor reduction compared to the baseline scenario. Instead, the free trade scenario and the scenario of technological change each caused a significant reduction on their own. The biggest single reduction was achieved by banning the cultivation of organic soils. However, this was also the most questionable scenario to realize; the reasons for this are elaborated further in the paper. The maximal reduction that can be achieved in the Finnish agricultural sector is about 11 % of the emission reduction needed to comply with the Kyoto protocol.
Abstract:
This research discusses the decoupling of CAP (Common Agricultural Policy) support and the impacts it may have on grain cultivation area and on the supply of beef and pork in Finland. The study presents definitions of and studies on decoupled agricultural subsidies, the development of the supply of grain, beef and pork in Finland, and the changes in the leading factors affecting supply between 1970 and 2005. Decoupling agricultural subsidies means that the linkage between subsidies and production levels is disconnected; subsidies do not affect the amount produced. The hypothesis is that decoupling will decrease the amounts produced in agriculture substantially. In supply research, econometric models representing the supply of agricultural products are estimated based on data on prices and amounts produced. With the estimated supply models, the impacts of changes in prices and public policies on the supply of agricultural products can be forecast. In this study, three regression models describing the combined cultivation areas of rye, wheat, oats and barley, and the supply of beef and pork, are estimated. Grain cultivation area and the supply of beef are estimated based on data from 1970 to 2005, and the supply of pork on data from 1995 to 2005. The dependencies in the models are postulated to be linear. The explanatory variables in the grain model were average return per hectare, agricultural subsidies, grain cultivation area in the previous year, and the cost of fertilization. The explanatory variables in the beef model were the total return from markets and subsidies and the amount of beef production in the previous year. In the pork model the explanatory variables were the total return, the price of piglets, investment subsidies, a trend of increasing productivity, and a dummy variable for the last quarter of the year. The R-squared was 0.81 for the model of grain cultivation area, 0.77 for the model of beef supply and 0.82 for the model of pork supply. The development of grain cultivation area and of the supply of beef and pork was estimated for 2006–2013 with these regression models. In the basic scenario, the development of the explanatory variables in 2006–2013 was postulated to be the same as it had been, on average, in 1995–2005. After the basic scenario, the impacts of decoupling CAP subsidies and domestic subsidies on cultivation area and supply were simulated. According to the results of the scenario of decoupling CAP subsidies, grain cultivation area decreases from 1.12 million hectares in 2005 to 1.0 million hectares in 2013 and the supply of beef from 88.8 million kilos in 2005 to 67.7 million kilos in 2013. Decoupling domestic and investment subsidies will decrease the supply of pork from 194 million kilos in 2005 to 187 million kilos in 2006. By 2013 the supply of pork grows to 203 million kilos.
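A linear supply model of the kind estimated here can be sketched with ordinary least squares: the dependent variable (e.g. beef supply) is regressed on total returns and its own lag, and the fitted equation is then iterated forward to produce a scenario path. The data below are synthetic and the "decoupling" scenario is mimicked simply by lowering the return variable, so this only demonstrates the mechanics, not the thesis results.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic annual data: beef supply (million kg) explained by total return
# (market return plus subsidies, index) and last year's supply.
n = 30
total_return = 100 + rng.normal(0, 10, n)
supply = np.empty(n)
supply[0] = 90.0
for t in range(1, n):
    supply[t] = 10 + 0.3 * total_return[t] + 0.5 * supply[t - 1] + rng.normal(0, 2)

# Estimate the linear model supply_t = a + b*return_t + c*supply_{t-1} by OLS.
X = np.column_stack([np.ones(n - 1), total_return[1:], supply[:-1]])
y = supply[1:]
(a, b, c), *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated: supply = {a:.2f} + {b:.3f}*return + {c:.3f}*lagged_supply")

# Scenario simulation: decoupling is mimicked by lowering the total return by 15%.
scenario_return = total_return[-1] * 0.85
s = supply[-1]
for year in range(2006, 2014):
    s = a + b * scenario_return + c * s
    print(year, round(s, 1))
```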
Abstract:
Digital elevation models (DEMs) have been an important topic in geography and the surveying sciences for decades due to their geomorphological importance as the reference surface for gravitation-driven material flow, as well as their wide range of uses and applications. When a DEM is used in terrain analysis, for example in automatic drainage basin delineation, errors of the model accumulate in the analysis results. Investigation of this phenomenon is known as error propagation analysis, which has a direct influence on the decision-making process based on interpretations and applications of terrain analysis. Additionally, it may have an indirect influence on data acquisition and DEM generation. The focus of the thesis was on fine toposcale DEMs, which are typically represented in a 5–50 m grid and used at the application scale of 1:10 000–1:50 000. The thesis presents a three-step framework for investigating error propagation in DEM-based terrain analysis. The framework includes methods for visualising the morphological gross errors of DEMs, exploring the statistical and spatial characteristics of the DEM error, making analytical and simulation-based error propagation analyses, and interpreting the error propagation analysis results. The DEM error model was built using geostatistical methods. The results show that appropriate and exhaustive reporting of the various aspects of fine toposcale DEM error is a complex task. This is due to the high number of outliers in the error distribution and the morphological gross errors, which are detectable with the presented visualisation methods. In addition, the use of a global characterisation of DEM error is a gross generalisation of reality due to the small extent of the areas in which the assumption of stationarity is not violated. This was shown using an exhaustive high-quality reference DEM based on airborne laser scanning and local semivariogram analysis. The error propagation analysis revealed that, as expected, an increase in the DEM vertical error will increase the error in surface derivatives. However, contrary to expectations, the spatial autocorrelation of the model appears to have varying effects on the error propagation analysis depending on the application. The use of a spatially uncorrelated DEM error model has been considered a 'worst-case scenario', but this opinion is now challenged because none of the DEM derivatives investigated in the study had maximum variation with spatially uncorrelated random error. Significant performance improvement was achieved in simulation-based error propagation analysis by applying process convolution in generating realisations of the DEM error model. In addition, a typology of uncertainty in drainage basin delineations is presented.
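A minimal Monte Carlo version of this kind of error propagation experiment: random error fields, either spatially uncorrelated white noise or spatially autocorrelated noise generated by process convolution (a Gaussian kernel smoothed over white noise and rescaled to the target standard deviation), are added to a DEM, and the resulting spread in a surface derivative (slope) is compared. The DEM, error magnitude and correlation range below are synthetic assumptions, not the thesis data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
cell = 10.0                       # grid resolution (m)
x, y = np.meshgrid(np.arange(100), np.arange(100))
dem = 50 * np.sin(x / 15.0) + 30 * np.cos(y / 20.0)   # synthetic terrain (m)

def slope_deg(z):
    dzdy, dzdx = np.gradient(z, cell)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

def error_field(sigma_z, corr_cells=0):
    """DEM error realisation: white noise, optionally autocorrelated via process convolution."""
    e = rng.normal(0.0, 1.0, dem.shape)
    if corr_cells > 0:
        e = gaussian_filter(e, corr_cells)
        e /= e.std()              # rescale so the marginal std matches sigma_z
    return sigma_z * e

base_slope = slope_deg(dem)
for label, corr in [("uncorrelated", 0), ("correlated (~5 cells)", 5)]:
    diffs = [slope_deg(dem + error_field(sigma_z=2.0, corr_cells=corr)) - base_slope
             for _ in range(100)]
    print(f"{label:22s} slope error std: {np.std(diffs):.2f} deg")
```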
Abstract:
Olkiluoto Island is situated in the northern Baltic Sea, near the southwestern coast of Finland, and is the proposed location of a spent nuclear fuel repository. This study examined Holocene palaeoseismicity in the Olkiluoto area and in the surrounding sea areas by computer simulations together with acoustic-seismic, sedimentological and dating methods. The most abundant rock type on the island is migmatitic mica gneiss, intruded by tonalites, granodiorites and granites. The surrounding Baltic Sea seabed consists of Palaeoproterozoic crystalline bedrock, which is to a great extent covered by younger Mesoproterozoic sedimentary rocks. The area contains several ancient deep-seated fracture zones that divide it into bedrock blocks. The response of the bedrock at the Olkiluoto site was modelled considering four future ice-age scenarios. Each scenario produced shear displacements of fractures with different times of occurrence and varying recovery rates. Generally, the larger the maximum ice load, the larger were the permanent shear displacements. For the basic case, the maximum shear displacements were a few centimetres at the proposed nuclear waste repository level, at approximately 500 m b.s.l. High-resolution, low-frequency echo-sounding was used to examine the Holocene submarine sedimentary structures and possible direct and indirect indicators of palaeoseismic activity in the northern Baltic Sea. Echo-sounding profiles of Holocene submarine sediments revealed slides and slumps, normal faults, debris flows and turbidite-type structures. The profiles also showed pockmarks and other structures related to gas or groundwater seepages, which might be related to fracture zone activation. Evidence of postglacial reactivation in the study area was derived from the spatial occurrence of some of the structures, especially the faults and the seepages, in the vicinity of some old bedrock fracture zones. Palaeoseismic event(s) (a single event or several events) in the Olkiluoto area were dated and the palaeoenvironment was characterized using palaeomagnetic, biostratigraphical and lithostratigraphical methods, enhancing the reliability of the chronology. Combined lithostratigraphy, biostratigraphy and palaeomagnetic stratigraphy yielded an age estimate of 10 650 to 10 200 cal. years BP for the palaeoseismic event(s). All Holocene sediment faults in the northern Baltic Sea occur at the same stratigraphical level, the age of which is estimated at 10 700 cal. years BP (9500 radiocarbon years BP). Their movement is suggested to have been triggered by palaeoseismic event(s) when the Late Weichselian ice sheet was retreating from the site and bedrock stresses were released along the bedrock fracture zones. The absence of younger or repeated traces of seismic events corroborates the suggestion that the major seismic activity occurred within a short time during and after the last deglaciation. The origin of the gas/groundwater seepages remains unclear. Their reflections in the echo-sounding profiles imply that part of the gas is derived from the organic-bearing Litorina and modern gyttja clays. However, at least some of the gas is derived from the bedrock. Additional information could be gained by pore water analysis from the pockmarks. Information on postglacial fault activation and on possible gas and/or fluid discharges under high hydraulic heads is relevant to the safety assessment of the planned spent nuclear fuel repository in the region.
Abstract:
Frictions are factors that hinder the trading of securities in financial markets. Typical frictions include limited market depth, transaction costs, lack of infinite divisibility of securities, and taxes. Conventional models used in mathematical finance often gloss over these issues, which affect almost all financial markets, by arguing that the impact of frictions is negligible and, consequently, that frictionless models are valid approximations. This dissertation consists of three research papers, which are related to the study of the validity of such approximations in two distinct modeling problems. Models of price dynamics that are based on diffusion processes, i.e., continuous strong Markov processes, are widely used in the frictionless scenario. The first paper establishes that diffusion models can indeed be understood as approximations of price dynamics in markets with frictions. This is achieved by introducing an agent-based model of a financial market where finitely many agents trade a financial security, the price of which evolves according to the price impacts generated by trades. It is shown that, if the number of agents is large, then under certain assumptions the price process of the security, which is a pure-jump process, can be approximated by a one-dimensional diffusion process. In a slightly extended model, in which agents may exhibit herd behavior, the approximating diffusion model turns out to be a stochastic volatility model. Finally, it is shown that when the agents' tendency to herd is strong, logarithmic returns in the approximating stochastic volatility model are heavy-tailed. The remaining papers are related to no-arbitrage criteria and superhedging in continuous-time option pricing models under small-transaction-cost asymptotics. Guasoni, Rásonyi, and Schachermayer have recently shown that, in such a setting, any financial security admits no arbitrage opportunities and there exist no feasible superhedging strategies for European call and put options written on it, as long as its price process is continuous and has the so-called conditional full support (CFS) property. Motivated by this result, CFS is established for certain stochastic integrals and a subclass of Brownian semistationary processes in the two papers. As a consequence, a wide range of possibly non-Markovian local and stochastic volatility models have the CFS property.
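A toy version of the agent-based mechanism described for the first paper: orders arrive at random times from a finite number of agents, each order moves the log price by a fixed impact whose size shrinks as the number of agents grows, so the price is a pure-jump process whose path increasingly resembles a diffusion. This is an illustrative caricature under assumed rates and impact scaling, not the model analysed in the dissertation.

```python
import numpy as np

def simulate_log_price(n_agents, horizon=1.0, trade_rate=200.0, impact_scale=1.0, seed=4):
    """Pure-jump log price driven by random buy/sell orders from n_agents agents."""
    rng = np.random.default_rng(seed)
    n_trades = rng.poisson(trade_rate * n_agents * horizon)   # total number of orders
    signs = rng.choice([-1.0, 1.0], size=n_trades)            # buy (+) or sell (-)
    jump = impact_scale / np.sqrt(trade_rate * n_agents)      # per-trade price impact
    return np.cumsum(signs * jump)                            # piecewise-constant log price

for n_agents in (10, 100, 10_000):
    path = simulate_log_price(n_agents)
    print(f"N={n_agents:6d}: {len(path)} jumps, terminal log price {path[-1]:+.3f}, "
          f"std of increments {np.std(np.diff(path)):.4f}")
```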
Abstract:
The ever-expanding growth of wireless access to the Internet in recent years has led to the proliferation of wireless and mobile devices connecting to the Internet. This has made it possible for mobile devices equipped with multiple radio interfaces to connect to the Internet using any of several wireless access network technologies, such as GPRS, WLAN and WiMAX, in order to get the connectivity best suited for the application. These access networks are highly heterogeneous and vary widely in characteristics such as bandwidth, propagation delay and geographical coverage. The mechanism by which a mobile device switches between these access networks during an ongoing connection is referred to as vertical handoff, and it often results in an abrupt and significant change in the access link characteristics. The most common Internet applications, such as Web browsing and e-mail, use the Transmission Control Protocol (TCP) as their transport protocol, and the behaviour of TCP depends on end-to-end path characteristics such as bandwidth and round-trip time (RTT). As the wireless access link is most likely the bottleneck of a TCP end-to-end path, the abrupt changes in link characteristics due to a vertical handoff may affect TCP behaviour adversely, degrading the performance of the application. The focus of this thesis is to study the effect of a vertical handoff on TCP behaviour and to propose algorithms that improve the handoff behaviour of TCP using cross-layer information about the changes in the access link characteristics. We begin by identifying the various problems of TCP due to a vertical handoff based on extensive simulation experiments. We use this study as a basis to develop cross-layer assisted TCP algorithms in handoff scenarios involving GPRS and WLAN access networks. We then extend the scope of the study by developing cross-layer assisted TCP algorithms in a broader context, applicable to a wide range of bandwidth and delay changes during a handoff. Finally, the algorithms developed here are shown to be easily extendable to the multiple-TCP-flow scenario. We evaluate the proposed algorithms by comparison with standard TCP (TCP SACK) and show that they are effective in improving TCP behaviour in vertical handoffs involving a wide range of access network bandwidths and delays. Our algorithms are easy to implement in real systems and involve modifications to the TCP sender algorithm only. The proposed algorithms are conservative in nature and do not adversely affect the performance of TCP in the absence of cross-layer information.
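As a schematic example of what cross-layer assistance can mean here (not the specific algorithms proposed in the thesis), the sketch below resets the sender's RTT estimator and re-seeds the slow-start threshold from the new link's bandwidth-delay product when a handoff notification with coarse link parameters arrives from the layers below, so that the sender does not keep using estimates inherited from the old access link. The hook name, state fields and constants are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TcpSenderState:
    mss: int = 1460          # bytes
    cwnd: int = 10 * 1460    # congestion window (bytes)
    ssthresh: int = 64 * 1460
    srtt: float = 0.2        # smoothed RTT (s)
    rttvar: float = 0.05
    rto: float = 0.4

def on_vertical_handoff(state: TcpSenderState, new_bandwidth_bps: float, new_rtt_s: float):
    """Hypothetical cross-layer hook: adapt sender state to the new access link.

    The link layer reports rough bandwidth and RTT of the new access network;
    the sender discards stale RTT statistics, re-seeds ssthresh from the new
    bandwidth-delay product and restarts conservatively from slow start."""
    bdp_bytes = new_bandwidth_bps / 8.0 * new_rtt_s
    state.ssthresh = max(int(bdp_bytes), 2 * state.mss)
    state.cwnd = 2 * state.mss
    state.srtt, state.rttvar = new_rtt_s, new_rtt_s / 2.0
    state.rto = max(state.srtt + 4.0 * state.rttvar, 1.0)
    return state

# Example: handoff from WLAN (54 Mbit/s, 20 ms RTT) to GPRS (40 kbit/s, 600 ms RTT).
state = on_vertical_handoff(TcpSenderState(), new_bandwidth_bps=40_000, new_rtt_s=0.6)
print(state)
```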
Abstract:
Wind power has grown fast internationally. It can reduce the environmental impact of energy production and increase energy security. Finland has a turbine industry, but wind electricity production has developed slowly, and the nationally set capacity targets have not been met. I explored the social factors that have affected the slow development of wind power in Finland by studying the perceptions of Finnish national-level wind power actors. By that I refer to people who affect the development of the wind power sector, such as officials, politicians, and representatives of wind industries and various organisations. The material consisted of interviews, a questionnaire, and written sources. The perceptions of wind power, its future, and the methods to promote it were divided. They were studied through discourse analysis, content analysis, and scenario construction. Definition struggles affect views of the significance and potential of wind power in Finland, and they also affect investments in wind power and wind power policy choices. Views of the future were demonstrated through scenarios. The views included scenarios of fast growth, but in the most pessimistic views wind power was not thought to be competitive without support measures even in 2025, and the wind power capacity was correspondingly low. In such a scenario, policy tool choices were expected to remain similar to the ones in use at the time of the interviews. So far, the development in Finland has followed this pessimistic scenario closely. Despite the scepticism about wind electricity production, the wind turbine industry was seen as a credible industry. For many wind power actors, as well as for Finnish wind power policy, the turbine industry is a significant motive to promote wind power. Domestic electricity production and the export turbine industry are linked in discourse through so-called home market argumentation. Finnish policy tools have included subsidies, research and development funding, and information policies. The criteria used to evaluate policy measures were both process-oriented and value-based. Feed-in tariffs and green certificates, which are common elsewhere, have not been taken into use in Finland. Some interviewees considered such tools unsuitable for free electricity markets and for the Finnish policy style, dictatorial, and against western values. Other interviewees supported their use because of their effectiveness. The current Finnish policy tools are not sufficiently effective to increase wind power production significantly. The marginalisation of wind power in discourses, pessimistic views of the future, and the view that the small consumer demand for wind electricity represents the political views of citizens towards promoting wind power make it more difficult to take stronger policy measures into use. Wind power has not yet significantly contributed to the ecological modernisation of the energy sector in Finland, but the situation may change as the need to reduce emissions from energy production continues.
Abstract:
The Antarctic system comprises the continent itself, Antarctica, and the ocean surrounding it, the Southern Ocean. The system plays an important part in the global climate due to its size, its high-latitude location and the negative radiation balance of its large ice sheets. Antarctica has also been in focus for several decades due to increased ultraviolet (UV) levels caused by stratospheric ozone depletion, and the disintegration of its ice shelves. In this study, measurements were made during three austral summers to study the optical properties of the Antarctic system and to produce radiation information for additional modelling studies related to specific phenomena found in the system. During the summer of 1997–1998, measurements of beam absorption and beam attenuation coefficients, and of downwelling and upwelling irradiance, were made in the Southern Ocean along a S-N transect at 6°E. The attenuation of photosynthetically active radiation (PAR) was calculated and used together with hydrographic measurements to judge whether the phytoplankton in the investigated areas of the Southern Ocean are light limited. Using the Kirk formula, the diffuse attenuation coefficient was linked to the absorption and scattering coefficients. The diffuse attenuation coefficients (KPAR) for PAR were found to vary between 0.03 and 0.09 1/m. Using the values for KPAR and the definition of the Sverdrup critical depth, the studied Southern Ocean plankton systems were found not to be light limited (see the sketch after this abstract). Variability in the spectral and total albedo of snow was studied in the Queen Maud Land region of Antarctica during the summers of 1999–2000 and 2000–2001. The measurement areas were the vicinity of the South African Antarctic research station SANAE 4 and a traverse near the Finnish Antarctic research station Aboa. The midday mean total albedos for snow were between 0.83, for clear skies, and 0.86, for overcast skies, at Aboa and between 0.81 and 0.83 at SANAE 4. The mean spectral albedo levels at Aboa and SANAE 4 were very close to each other. The variations in the spectral albedos were due more to differences in ambient conditions than to variations in snow properties. A Monte Carlo model was developed to study the spectral albedo and to develop a novel non-destructive method for measuring the diffuse attenuation coefficient of snow. The method was based on the decay of upwelling radiation with horizontal distance from a source of downwelling light, which was assumed to be related to the diffuse attenuation coefficient. In the model, the attenuation coefficient obtained from the upwelling irradiance was higher than that obtained using vertical profiles of downwelling irradiance. The model results were compared to field measurements made on dry snow in Finnish Lapland and they correlated reasonably well. Low-elevation (below 1000 m) blue-ice areas may experience substantial melt-freeze cycles due to absorbed solar radiation and the small heat conductivity of the ice. A two-dimensional (x-z) model was developed to simulate the formation of and water circulation in subsurface ponds. The model results show that, for a physically reasonable parameter set, the formation of liquid water within the ice can be reproduced. The results, however, are sensitive to the chosen parameter values, and their exact values are not well known.
Vertical convection and a weak overturning circulation are generated, stratifying the fluid and transporting warmer water downward, thereby causing additional melting at the base of the pond. In a 50-year integration, a global warming scenario, mimicked by a decadal-scale increase in air temperature amounting to 3 degrees per 100 years, leads to a general increase in subsurface water volume. The ice did not disintegrate as a result of the air temperature increase over the 50-year integration.
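The attenuation part of the analysis above can be sketched briefly: under the Beer-Lambert assumption E(z) = E0 exp(-K z), the diffuse attenuation coefficient K_PAR is the (negative) slope of ln(irradiance) against depth, and the depth at which irradiance falls to a given compensation level follows directly from it; the light-limitation judgement in the thesis additionally uses the Sverdrup critical-depth criterion together with mixed-layer information. The profile values and compensation irradiance below are synthetic assumptions.

```python
import numpy as np

# Synthetic PAR irradiance profile: E(z) = E0 * exp(-K*z) with a little noise.
rng = np.random.default_rng(5)
depth = np.arange(0.0, 100.0, 5.0)                  # m
true_k, e0 = 0.06, 1500.0                           # 1/m, umol photons m-2 s-1
par = e0 * np.exp(-true_k * depth) * np.exp(rng.normal(0, 0.03, depth.size))

# K_PAR from the slope of ln(E) versus depth (Beer-Lambert fit).
slope, intercept = np.polyfit(depth, np.log(par), 1)
k_par = -slope
print(f"estimated K_PAR = {k_par:.3f} 1/m")

# Depth at which irradiance drops to a compensation level E_c:
# E0*exp(-K*z_c) = E_c  =>  z_c = ln(E0/E_c) / K
e_c = 10.0                                          # illustrative compensation irradiance
z_c = np.log(np.exp(intercept) / e_c) / k_par
print(f"compensation depth ~ {z_c:.0f} m")
```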