16 results for Minkowski Sum of Sets

in Helda - Digital Repository of University of Helsinki


Relevance:

100.00%

Publisher:

Abstract:

The effect of temperature on height growth of Scots pine in the northern boreal zone in Lapland was studied on two different time scales. Intra-annual growth was monitored in four stands over up to four growing seasons using an approximately biweekly measurement interval. Inter-annual growth was studied using growth records representing seven stands and five geographical locations. All the stands were growing on a dry to semi-dry heath, a typical site type for pine stands in Finland. The applied methodology is based on applied time-series analysis and multilevel modelling. Intra-annual elongation of the leader shoot correlated with temperature sum accumulation. Height growth ceased when, on average, 41% of the relative temperature sum of the site was achieved (the observed minimum and maximum were 38% and 43%). The relative temperature sum was calculated by dividing the actual temperature sum by the long-term mean of the total annual temperature sum for the site. Our results suggest that annual height growth ceases when a location-specific temperature sum threshold is attained. The positive effect of the mean July temperature of the previous year on annual height increment proved to be very strong at high latitudes. The mean November temperature of the year before the previous one had a statistically significant effect on height increment in the three northernmost stands. The effect of mean monthly precipitation on annual height growth was statistically insignificant. There was a non-linear dependence between the length and needle density of annual shoots. Exceptionally low height growth results in high needle density, but the effect is weaker in years of average or good height growth. Radial growth and the next year's height growth are both largely controlled by the current July temperature. Nevertheless, their growth variation in terms of minimum and maximum is not necessarily strongly correlated. This is partly because height growth is more sensitive to changes in temperature. In addition, the actual effective temperature period is not exactly the same for these two growth components. Yet there is a long-term balance that was also statistically distinguishable: radial growth correlated significantly with height growth with a lag of 2 years. Temperature periods shorter than a month are more effective explanatory variables than monthly means, but the improvement ranges from modest to good when Julian days or growing degree days are used as pointers.
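
The relative-temperature-sum calculation described above can be made concrete with a small sketch. A minimal Python illustration, assuming the common Finnish +5 °C degree-day base and hypothetical input values; the study's exact procedure may differ:

```python
# Sketch of the relative temperature sum described above.
# Assumptions: a +5 degC base temperature (a common degree-day convention
# in Finland) and illustrative input data.

def temperature_sum(daily_mean_temps, base=5.0):
    """Accumulate degree-days above the base temperature."""
    return sum(max(t - base, 0.0) for t in daily_mean_temps)

def relative_temperature_sum(accumulated_dd, long_term_annual_mean_dd):
    """Actual accumulated temperature sum divided by the site's
    long-term mean annual temperature sum."""
    return accumulated_dd / long_term_annual_mean_dd

# Height growth is expected to cease when ~41% of the relative
# temperature sum is reached (observed range 38-43%).
CESSATION_THRESHOLD = 0.41

season_temps = [8.0, 12.5, 15.0, 17.5, 16.0, 14.0]  # hypothetical daily means
dd = temperature_sum(season_temps)
rel = relative_temperature_sum(dd, long_term_annual_mean_dd=900.0)
print(f"accumulated d.d. = {dd:.1f}, relative sum = {rel:.2%}",
      "-> growth ceased" if rel >= CESSATION_THRESHOLD else "-> still growing")
```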

Relevance:

100.00%

Publisher:

Abstract:

Achieving sustainable consumption patterns is a crucial step on the way towards sustainability. The scientific knowledge used to decide which priorities to set and how to enforce them has to converge with societal, political, and economic initiatives on various levels: from individual household decision-making to agreements and commitments in global policy processes. The aim of this thesis is to draw a comprehensive and systematic picture of sustainable consumption, and to do this it develops the concept of Strong Sustainable Consumption Governance. In this concept, consumption is understood as resource consumption. This includes consumption by industries, public consumption, and household consumption. Next to the availability of resources (including the available sink capacity of the ecosystem) and their use and distribution among the Earth's population, the thesis also considers their contribution to human well-being. This implies giving specific attention to the levels and patterns of consumption. Methods: The thesis introduces the terminology and various concepts of Sustainable Consumption and of Governance. It briefly elaborates on the methodology of Critical Realism and its potential for analysing Sustainable Consumption. It describes the various methods on which the research is based and sets out the political implications a governance approach towards Strong Sustainable Consumption may have. Two models are developed: one for the assessment of the environmental relevance of consumption activities, another to identify the influences of globalisation on the determinants of consumption opportunities. Results: One of the major challenges for Strong Sustainable Consumption is that it is not in line with the current political mainstream, that is, the belief that economic growth can cure all our problems. The proponents therefore have to battle against a strong headwind. Their motivation, however, is the conviction that there is no alternative. Efforts have to be taken on multiple levels by multiple actors, and all of them are needed, as they constitute the individual strings that together make up the rope. However, everyone must ensure that they are pulling in the same direction. It might be useful to apply a carrot-and-stick strategy to stimulate public debate. The stick in this case is to create a sense of urgency. The carrot would be to better articulate to the public the message that a shrinking of the economy is not as much of a disaster as mainstream economics tends to suggest. In parallel to this, it is necessary to demand that governments take responsibility for governance. The dominant strategy is still information provision, but there is ample evidence that hard policies such as regulatory and economic instruments are the most effective. As for Civil Society Organizations, it is recommended that they overcome the habit of promoting Sustainable (in fact green) Consumption by using marketing strategies and instead foster public debate on values and well-being. This includes appreciating the potential of social innovation. Countless such initiatives are under way, but their potential is still insufficiently explored. Beyond the question of how to multiply such approaches, it is also necessary to establish political macro-structures to foster them.

Relevance:

100.00%

Publisher:

Abstract:

Many problems in analysis have been solved using the theory of Hodge structures. P. Deligne started to treat these structures in a categorical way. Following him, we introduce the categories of mixed real and complex Hodge structures. The category of mixed Hodge structures over the field of real or complex numbers is a rigid abelian tensor category, and in fact a neutral Tannakian category. Therefore it is equivalent to the category of representations of an affine group scheme. Direct sums of pure Hodge structures of different weights over the real or complex numbers can be realized as representations of the torus group, whose complex points form the Cartesian product of two punctured complex planes. Mixed Hodge structures turn out to consist of the data of a direct sum of pure Hodge structures of different weights together with a nilpotent automorphism. Therefore mixed Hodge structures correspond to the representations of a certain semidirect product of a nilpotent group and the torus group acting on it.
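
The torus-group description above can be summarized in a formula. A sketch of the action in the complex case, with sign conventions that vary between references:

```latex
% Sketch of the torus action behind the equivalence above; sign
% conventions for the bigrading (p,q) differ between references.
\[
  V \;=\; \bigoplus_{p,q} V^{p,q},
  \qquad
  \mathbb{T}(\mathbb{C}) \;\cong\; \mathbb{C}^{*} \times \mathbb{C}^{*},
\]
\[
  (z_1, z_2)\cdot v \;=\; z_1^{\,p}\, z_2^{\,q}\, v
  \quad \text{for } v \in V^{p,q},
\]
% so a direct sum of pure Hodge structures is a representation of the
% torus $\mathbb{T}$, and a mixed Hodge structure corresponds to a
% representation of a semidirect product of a nilpotent group with
% $\mathbb{T}$ acting on it.
```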

Relevance:

100.00%

Publisher:

Abstract:

The metabolism of an organism consists of a network of biochemical reactions that transform small molecules, or metabolites, into others in order to produce energy and building blocks for essential macromolecules. The goal of metabolic flux analysis is to uncover the rates, or fluxes, of those biochemical reactions. In a steady state, the sum of the fluxes that produce an internal metabolite is equal to the sum of the fluxes that consume the same molecule. Thus the steady state imposes linear balance constraints on the fluxes. In general, the balance constraints imposed by the steady state are not sufficient to uncover all the fluxes of a metabolic network. The fluxes through cycles and alternative pathways between the same source and target metabolites remain unknown. More information about the fluxes can be obtained from isotopic labelling experiments, where a cell population is fed with labelled nutrients, such as glucose that contains 13C atoms. Labels are then transferred by biochemical reactions to other metabolites. The relative abundances of different labelling patterns in internal metabolites depend on the fluxes of the pathways producing them. Thus, the relative abundances of different labelling patterns contain information about the fluxes that cannot be uncovered from the balance constraints derived from the steady state. The field of research that estimates the fluxes utilizing the measured constraints on the relative abundances of different labelling patterns induced by 13C-labelled nutrients is called 13C metabolic flux analysis. There exist two approaches to 13C metabolic flux analysis. In the optimization approach, a non-linear optimization task is constructed in which candidate fluxes are iteratively generated until they fit the measured abundances of different labelling patterns. In the direct approach, the linear balance constraints given by the steady state are augmented with linear constraints derived from the abundances of different labelling patterns of metabolites. Thus, mathematically involved non-linear optimization methods that can get stuck in local optima can be avoided. On the other hand, the direct approach may require more measurement data than the optimization approach to obtain the same flux information. Furthermore, the optimization framework can easily be applied regardless of the labelling measurement technology and with all network topologies. In this thesis we present a formal computational framework for direct 13C metabolic flux analysis. The aim of our study is to construct as many linear constraints on the fluxes from the 13C labelling measurements as possible, using only computational methods that avoid non-linear techniques and are independent of the type of measurement data, the labelling of external nutrients, and the topology of the metabolic network. The presented framework is the first representative of the direct approach to 13C metabolic flux analysis that is free from restricting assumptions about these parameters. In our framework, measurement data is first propagated from the measured metabolites to other metabolites. The propagation is facilitated by the flow analysis of metabolite fragments in the network. Then new linear constraints on the fluxes are derived from the propagated data by applying techniques of linear algebra. Based on the results of the fragment flow analysis, we also present an experiment planning method that selects the sets of metabolites whose relative abundances of different labelling patterns are most useful for 13C metabolic flux analysis. Furthermore, we give computational tools to process raw 13C labelling data produced by tandem mass spectrometry into a form suitable for 13C metabolic flux analysis.
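
A minimal sketch of the steady-state balance constraints, using a toy network (not from the thesis) to show why fluxes through alternative pathways stay undetermined:

```python
# Steady-state balance constraints S v = 0 for a toy network:
# nutrient -> A (v1), two alternative pathways A -> B (v2, v3),
# and B -> product (v4). Rows = internal metabolites, columns = reactions.
import numpy as np
from scipy.linalg import null_space

S = np.array([
    # v1   v2   v3   v4
    [ 1., -1., -1.,  0.],   # A: produced by v1, consumed by v2 and v3
    [ 0.,  1.,  1., -1.],   # B: produced by v2 and v3, consumed by v4
])

N = null_space(S)            # basis of all steady-state flux distributions
print(N.round(3))
# The null space is 2-dimensional: the split of flux between the parallel
# pathways v2 and v3 cannot be resolved from the balance constraints alone,
# which is exactly where 13C labelling measurements add information.
```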

Relevance:

100.00%

Publisher:

Abstract:

Volatile organic compounds (VOCs) affect atmospheric chemistry and thereby also influence climate change in many ways. The long-lived greenhouse gases and tropospheric ozone are the most important radiative forcing components warming the climate, while aerosols are the most important cooling component. VOCs can have warming effects on the climate: they participate in tropospheric ozone formation and compete for oxidants with the greenhouse gases, thus, for example, lengthening the atmospheric lifetime of methane. Some VOCs, on the other hand, cool the atmosphere by taking part in the formation of aerosol particles. Some VOCs, in addition, have direct health effects, such as the carcinogenic benzene. VOCs are emitted into the atmosphere in various processes. Primary emissions of VOCs include biogenic emissions from vegetation, biomass burning and human activities. VOCs are also produced in secondary emissions from the reactions of other organic compounds. Globally, forests are the largest source of VOCs entering the atmosphere. This thesis focuses on measurements of emissions and concentrations of VOCs in one of the largest vegetation zones in the world, the boreal zone. An automated sampling system was designed and built for continuous VOC concentration and emission measurements with a proton transfer reaction mass spectrometer (PTR-MS). The system measured one hour at a time in three-hourly cycles: 1) ambient volume mixing-ratios of VOCs in the Scots-pine-dominated boreal forest, 2) VOC fluxes above the canopy, and 3) VOC emissions from Scots pine shoots. In addition to the online PTR-MS measurements, we determined the composition and seasonality of the VOC emissions from a Siberian larch with adsorbent samples and GC-MS analysis. The VOC emissions from Siberian larch were reported for the first time in the literature. The VOC emissions were 90% monoterpenes (mainly sabinene) and the rest sesquiterpenes (mainly α-farnesene). The normalized monoterpene emission potentials were highest in late summer, rising again in late autumn. The normalized sesquiterpene emission potentials were also highest in late summer, but decreased towards the autumn. The emissions of mono- and sesquiterpenes from the deciduous Siberian larch, as well as the emissions of monoterpenes measured from the evergreen Scots pine, were well described by the temperature-dependent algorithm. In the Scots-pine-dominated forest, canopy-scale emissions of monoterpenes and oxygenated VOCs (OVOCs) were of the same magnitude. Methanol and acetone were the most abundant OVOCs emitted from the forest and also in the ambient air. Annually, methanol and acetone mixing ratios were of the order of 1 ppbv. The monoterpene and the summed isoprene and 2-methyl-3-buten-2-ol (MBO) volume mixing-ratios were an order of magnitude lower. The majority of the monoterpene and methanol emissions from the Scots-pine-dominated forest were explained by emissions from Scots pine shoots. The VOCs were divided into three classes based on the dynamics of the summer-time concentrations: 1) reactive compounds with local biological, anthropogenic or chemical sources (methanol, acetone, butanol and hexanal), 2) compounds whose emissions are only temperature-dependent (monoterpenes), 3) long-lived compounds (benzene, acetaldehyde). Biogenic VOC (methanol, acetone, isoprene + MBO and monoterpene) volume mixing-ratios had clear diurnal patterns during summer. The ambient mixing ratios of other VOCs did not show this behaviour. During winter we did not observe systematic diurnal cycles for any of the VOCs. Different sources, removal processes and turbulent mixing explained the dynamics of the measured mixing-ratios qualitatively. However, quantitative understanding will require long-term emission measurements of the OVOCs and the use of comprehensive chemistry models.

Keywords: hydrocarbons, VOC, fluxes, volume mixing-ratio, boreal forest
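
The "temperature-dependent algorithm" referred to above is commonly the exponential form of Guenther et al. (1993); whether the thesis used exactly these parameter values is an assumption here. A minimal sketch:

```python
# Sketch of the standard temperature-dependent emission algorithm
# (Guenther et al. 1993 form) often used for monoterpene emissions.
# The parameter values below are typical defaults, assumed for
# illustration.
import math

BETA = 0.09           # K^-1, empirical temperature coefficient
T_STANDARD = 303.15   # K, standard temperature (30 degC)

def emission_rate(e_standard, temperature_k):
    """Emission rate at temperature_k, given the normalized emission
    potential e_standard (the emission at the standard temperature)."""
    return e_standard * math.exp(BETA * (temperature_k - T_STANDARD))

# Example: an emission potential of 1.0 ug g^-1 h^-1 at 30 degC
for t_c in (10, 20, 30):
    print(t_c, "degC ->", round(emission_rate(1.0, t_c + 273.15), 3), "ug/g/h")
```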

Relevance:

100.00%

Publisher:

Abstract:

We present a measurement of the mass of the top quark using data corresponding to an integrated luminosity of 1.9 fb^-1 of ppbar collisions collected at sqrt{s} = 1.96 TeV with the CDF II detector at Fermilab's Tevatron. This is the first measurement of the top quark mass using top-antitop pair candidate events in the lepton + jets and dilepton decay channels simultaneously. We reconstruct two observables in each channel and use a non-parametric kernel density estimation technique to derive two-dimensional probability density functions from simulated signal and background samples. The observables are the top quark mass and the invariant mass of two jets from the W decay in the lepton + jets channel, and the top quark mass and the scalar sum of transverse energy of the event in the dilepton channel. We perform a simultaneous fit for the top quark mass and the jet energy scale, which is constrained in situ by the hadronic W boson mass. Using 332 lepton + jets candidate events and 144 dilepton candidate events, we measure the top quark mass to be mtop = 171.9 +/- 1.7 (stat. + JES) +/- 1.1 (syst.) GeV/c^2 = 171.9 +/- 2.0 GeV/c^2.
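
The kernel density estimation step can be illustrated with a short sketch using toy Gaussian pseudo-data; the analysis' actual samples, kernels and bandwidths are not reproduced here:

```python
# Sketch of the non-parametric KDE step: building a 2-D probability
# density from simulated (m_top, m_jj)-like pairs. Toy pseudo-data only.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
m_top = rng.normal(172.0, 12.0, size=5000)   # reconstructed top mass, GeV/c^2
m_jj = rng.normal(80.4, 9.0, size=5000)      # dijet (W) mass, GeV/c^2

kde = gaussian_kde(np.vstack([m_top, m_jj]))  # 2-D density estimate

# Density evaluated at one observable pair (one event's measurements):
print(kde(np.array([[171.9], [80.4]])))
```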

Relevance:

100.00%

Publisher:

Abstract:

We present a measurement of the top quark mass with t-tbar dilepton events produced in p-pbar collisions at the Fermilab Tevatron at $\sqrt{s}$=1.96 TeV and collected by the CDF II detector. A sample of 328 events with a charged electron or muon and an isolated track, corresponding to an integrated luminosity of 2.9 fb$^{-1}$, is selected as t-tbar candidates. To account for the unconstrained event kinematics, we scan over the phase space of the azimuthal angles ($\phi_{\nu_1},\phi_{\nu_2}$) of the neutrinos and reconstruct the top quark mass for each $\phi_{\nu_1},\phi_{\nu_2}$ pair by minimizing a $\chi^2$ function in the t-tbar dilepton hypothesis. We assign $\chi^2$-dependent weights to the solutions in order to build a preferred mass for each event. Preferred mass distributions (templates) are built from simulated t-tbar and background events, and parameterized in order to provide continuous probability density functions. A likelihood fit to the mass distribution in data as a weighted sum of signal and background probability density functions gives a top quark mass of $165.5^{+3.4}_{-3.3}$(stat.)$\pm 3.1$(syst.) GeV/$c^2$.
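
A schematic of the weighted scan described above; the weight form exp(−χ²/2) and the toy kinematic function are assumptions for illustration only:

```python
# Schematic of the scan: for each (phi_nu1, phi_nu2) pair a chi^2 is
# minimized, and the solutions are combined with chi^2-dependent weights
# into one preferred mass per event. Both chi2_min and the weight form
# exp(-chi^2/2) are stand-ins, not the analysis' actual functions.
import numpy as np

def chi2_min(phi1, phi2):
    """Stand-in for the kinematic chi^2 minimization at fixed neutrino
    azimuths; returns (chi2, reconstructed top mass in GeV/c^2)."""
    m = 170.0 + 3.0 * np.cos(phi1 - phi2)         # toy mass solution
    chi2 = 1.0 + 2.0 * (1 - np.cos(phi1 + phi2))  # toy goodness of fit
    return chi2, m

phis = np.linspace(0, 2 * np.pi, 24, endpoint=False)
weights, masses = [], []
for p1 in phis:
    for p2 in phis:
        chi2, m = chi2_min(p1, p2)
        weights.append(np.exp(-0.5 * chi2))       # chi^2-dependent weight
        masses.append(m)

preferred_mass = np.average(masses, weights=weights)
print(f"preferred mass for this event: {preferred_mass:.1f} GeV/c^2")
```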

Relevance:

100.00%

Publisher:

Abstract:

Two methods of pre-harvest inventory were designed and tested on three cutting sites containing a total of 197 500 m3 of wood. These sites were located in flat-ground boreal forests in northwestern Quebec. Both methods studied involved scaling of trees harvested to clear the road path one year (or more) prior to harvest of the adjacent cut-blocks. The first method (ROAD) considers the total road right-of-way volume divided by the total road area cleared. The resulting volume per hectare is then multiplied by the total cut-block area scheduled for harvest during the following year to obtain the total estimated cutting volume. The second method (STRATIFIED) also involves scaling of trees cleared from the road. However, in STRATIFIED, log scaling data are stratified by forest stand location. A volume per hectare is calculated for each stretch of road that crosses a single forest stand. This volume per hectare is then multiplied by the remaining area of the same forest stand scheduled for harvest one year later. The sum of all resulting estimated volumes per stand gives the total estimated cutting volume for all cut-blocks adjacent to the studied road. A third method (MNR) was also used to estimate cut volumes of the sites studied. This method represents the existing technique for estimating cutting volume in the province of Quebec. It involves summing the cut volume over all forest stands. The cut volume is estimated by multiplying the area of each stand by its estimated volume per hectare obtained from standard stock tables provided by the government. The resulting total estimated volume per cut-block for all three methods was then compared with the actual measured cut-block volume (MEASURED). This analysis revealed a significant difference between the MEASURED and MNR methods, with the MNR volume estimate being 30% higher than MEASURED. However, no significant difference from MEASURED was observed for the volume estimates of the ROAD and STRATIFIED methods, which had estimated cutting volumes 19% and 5% lower than MEASURED, respectively. Thus the ROAD and STRATIFIED methods are good ways to estimate cut-block volumes after road right-of-way harvest under conditions similar to those examined in this study.
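
The ROAD and STRATIFIED arithmetic reduces to two small formulas. A sketch with hypothetical numbers:

```python
# Sketch of the two pre-harvest estimators described above, with
# hypothetical volumes and areas. ROAD: one pooled volume per hectare
# from the road right-of-way; STRATIFIED: a volume per hectare for each
# forest stand crossed by the road.

def road_estimate(total_row_volume_m3, road_area_ha, cutblock_area_ha):
    """Total right-of-way volume / cleared road area, scaled to the
    cut-block area scheduled for harvest."""
    return total_row_volume_m3 / road_area_ha * cutblock_area_ha

def stratified_estimate(stands):
    """stands: iterable of (row_volume_m3, road_area_ha, remaining_area_ha)
    per forest stand; per-stand estimates are summed."""
    return sum(v / a_road * a_remaining for v, a_road, a_remaining in stands)

print(road_estimate(1200.0, 8.0, 450.0))          # m3, one pooled ratio
print(stratified_estimate([(400.0, 2.0, 120.0),
                           (500.0, 3.5, 180.0),
                           (300.0, 2.5, 150.0)])) # m3, stand by stand
```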

Relevance:

100.00%

Publisher:

Abstract:

The dissertation describes the conscription of Finnish soldiers into the Swedish army during the Thirty Years' War. The work concentrates on so-called substitute soldiers, who were hired for conscription by wealthier peasants, who thus avoided the draft. The substitutes were the largest group recruited by the Swedish army in Sweden, making up approximately 25-80% of the total number of soldiers. They received a significant sum of money from the peasants: about 50-250 Swedish copper dalers, corresponding to the price of a small peasant house. The practice of using substitutes was managed by the local village council. The recruits were normally drawn from the landless population. However, when there was an urgent need of men, even the yeomen had to leave their homes for the distant garrisons across the Baltic. Conscription and its devastating effect on agricultural production also reduced the flow of state revenues. One of the tasks of the dissertation is to examine the correlation between the custom of using substitutes and the abandonment of farmsteads (referring, in the first place, to the inability to pay taxes). In areas where no substitutes were available, the peasants had to join the army themselves, which normally led to abandonment and financial ruin because agricultural production was based on physical labour. This led to the rise of large farms at the expense of smaller ones. Hence, the system of substitutes was a factor that transformed the mode of settlement.

Relevance:

100.00%

Publisher:

Abstract:

Disorders resulting from degenerative changes in the nervous system are progressive and incurable. Both environmental and inherited factors affect neuron function, and neurodegenerative diseases are often the sum of both factors. The cellular events leading to neuronal death are still mostly unknown. Monogenic diseases can offer a model for studying the mechanisms of neurodegeneration. Neuronal ceroid lipofuscinoses, or NCLs, are a group of monogenic, recessively inherited diseases affecting mostly children. NCLs cause severe and specific loss of neurons in the central nervous system, resulting in the deterioration of motor and mental skills and leading to premature death. In this thesis, the focus has been on two forms of NCL, the infantile NCL (INCL, CLN1) and the Finnish variant of late infantile NCL (vLINCLFin, CLN5). INCL is caused by mutations in the CLN1 gene encoding the PPT1 (palmitoyl protein thioesterase 1) enzyme. PPT1 removes a palmitate moiety from proteins in experimental conditions, but its substrates in vivo are not known. In the Finnish variant of late infantile NCL (vLINCLFin), the CLN5 gene is defective, but the function of the encoded CLN5 protein has remained unknown. The aim of this thesis was to elucidate the disease mechanisms of these two NCL diseases by focusing on the molecular interactions of the defective proteins. In this work, the first interaction partner for PPT1, the mitochondrial F1-ATP synthase, was described. This protein has been linked to HDL metabolism in addition to its well-known role in mitochondrial energy production. The connection between PPT1 and the F1-ATP synthase was studied utilizing the INCL disease model, the genetically modified Ppt1-deficient mice. The levels of F1-ATP synthase subunits were increased on the surface of Ppt1-deficient neurons when compared to controls. We also detected several changes in lipid metabolism, both at the cellular and systemic levels, in Ppt1-deficient mice when compared to controls. The interactions between different NCL proteins were also elucidated. We were able to detect novel interactions between CLN5 and other NCL proteins, and to replicate the previously reported interactions. Some of the novel interactions influenced the intracellular trafficking of the proteins. The multiple interactions between CLN5 and other NCL proteins suggest a connection between the NCL subtypes at the cellular level. The main results of this thesis provide information about the neuronal function of PPT1. The connection between INCL and neuronal lipid metabolism introduces a new perspective to this rather poorly characterized subject. The evidence of interactions between NCL proteins provides a basis for future research trying to untangle the NCL disease mechanisms and to develop strategies for therapies.

Relevance:

100.00%

Publisher:

Abstract:

The problem of recovering information from measurement data has been studied for a long time. In the beginning, the methods were mostly empirical, but towards the end of the 1960s Backus and Gilbert started developing mathematical methods for the interpretation of geophysical data. The problem of recovering information about a physical phenomenon from measurement data is an inverse problem. Throughout this work, the statistical inversion method is used to obtain a solution. Assuming that the measurement vector is a realization of fractional Brownian motion, the goal is to retrieve the amplitude and the Hurst parameter. We prove that under some conditions, the solution of the discretized problem coincides with the solution of the corresponding continuous problem as the number of observations tends to infinity. The measurement data is usually noisy, and we assume the data to be the sum of two vectors: the trend and the noise. Both vectors are supposed to be realizations of fractional Brownian motions, and the goal is to retrieve their parameters using the statistical inversion method. We prove partial uniqueness of the solution. Moreover, with the support of numerical simulations, we show that in certain cases the solution is reliable and the reconstruction of the trend vector is quite accurate.
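
The data model above (measurement = trend + noise, both fractional Brownian motions) can be sketched directly from the fBm covariance function; the grid, amplitudes and Hurst values here are illustrative:

```python
# Minimal sketch of the data model: simulate fractional Brownian motion
# realizations from the covariance function via a Cholesky factor.
import numpy as np

def fbm_sample(n, hurst, amplitude=1.0, seed=0):
    """One realization of fBm on t = 1/n, 2/n, ..., 1, using
    Cov(t,s) = (amplitude^2 / 2) (t^2H + s^2H - |t-s|^2H)."""
    t = np.arange(1, n + 1) / n
    tt, ss = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * amplitude**2 * (tt**(2*hurst) + ss**(2*hurst)
                                - np.abs(tt - ss)**(2*hurst))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for stability
    rng = np.random.default_rng(seed)
    return t, L @ rng.standard_normal(n)

# A noisy measurement as the sum of two fBm realizations (trend + noise):
t, trend = fbm_sample(256, hurst=0.8, amplitude=1.0, seed=1)
_, noise = fbm_sample(256, hurst=0.3, amplitude=0.2, seed=2)
data = trend + noise
print(data[:5])
```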

Relevance:

100.00%

Publisher:

Abstract:

This thesis consists of three articles on Orlicz-Sobolev capacities. Capacity is a set function which gives information about the size of sets. Capacity is a useful concept in the study of partial differential equations, generalizations of exponential-type inequalities, Lebesgue point theory, and other topics related to weakly differentiable functions, such as functions belonging to some Sobolev space or Orlicz-Sobolev space. In this thesis it is assumed that the defining function of the Orlicz-Sobolev space, the Young function, satisfies certain growth conditions. In the first article, the null sets of two different versions of Orlicz-Sobolev capacity are studied. Sufficient conditions are given so that these two versions of capacity have the same null sets. The importance of having information about null sets lies in the fact that sets of capacity zero play a similar role in the Orlicz-Sobolev space setting as sets of measure zero do in the Lebesgue space and Orlicz space settings. The second article continues the work of the first. In it, it is shown that if a Young function satisfies certain conditions, then the two versions of Orlicz-Sobolev capacity have the same null sets for its complementary Young function. In the third article, the metric properties of Orlicz-Sobolev capacities are studied. It is usually difficult or impossible to calculate the capacity of a set. In applications it is often useful to have estimates for the Orlicz-Sobolev capacities of balls. Such estimates are obtained in this article, when the Young function satisfies some growth conditions.
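
To fix ideas, one commonly used variant of Orlicz-Sobolev capacity is the following; the two versions compared in the thesis may be defined differently:

```latex
% One common variant of Orlicz-Sobolev capacity, stated only to fix
% ideas; the thesis compares two versions whose exact definitions may
% differ from this one. For a Young function $\Phi$ and $E \subset \mathbb{R}^n$,
\[
  \operatorname{cap}_{\Phi}(E)
  \;=\;
  \inf \Big\{ \int_{\mathbb{R}^n} \big( \Phi(|u|) + \Phi(|\nabla u|) \big)\,dx
  \;:\; u \in W^{1,\Phi}(\mathbb{R}^n),\ u \ge 1
  \text{ on a neighbourhood of } E \Big\}.
\]
% Sets $E$ with $\operatorname{cap}_{\Phi}(E) = 0$ are the null sets
% referred to above.
```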

Relevance:

100.00%

Publisher:

Abstract:

This study examines the properties of Generalised Regression (GREG) estimators for domain class frequencies and proportions. The family of GREG estimators forms the class of design-based model-assisted estimators. All GREG estimators utilise auxiliary information via modelling. The classic GREG estimator with a linear fixed-effects assisting model (GREG-lin) is one example. But when estimating class frequencies, the study variable is binary or polytomous. Therefore logistic-type assisting models (e.g. the logistic or probit model) should be preferred over the linear one. However, GREG estimators other than GREG-lin are rarely used, and knowledge about their properties is limited. This study examines the properties of L-GREG estimators, which are GREG estimators with fixed-effects logistic-type models. Three research questions are addressed. First, I study whether and when L-GREG estimators are more accurate than GREG-lin. Theoretical results and Monte Carlo experiments, which cover both equal and unequal probability sampling designs and a wide variety of model formulations, show that in standard situations the difference between L-GREG and GREG-lin is small. But in the case of a strong assisting model, two interesting situations arise: if the domain sample size is reasonably large, L-GREG is more accurate than GREG-lin, and if the domain sample size is very small, estimation of the assisting model parameters may be inaccurate, resulting in bias for L-GREG. Second, I study variance estimation for the L-GREG estimators. The standard variance estimator (S) for all GREG estimators resembles the Sen-Yates-Grundy variance estimator, but it is a double sum of prediction errors, not of the observed values of the study variable. Monte Carlo experiments show that S underestimates the variance of L-GREG especially if the domain sample size is small or if the assisting model is strong. Third, since the standard variance estimator S often fails for the L-GREG estimators, I propose a new augmented variance estimator (A). The difference between S and the new estimator A is that the latter takes into account the difference between the sample-fit model and the census-fit model. In Monte Carlo experiments, the new estimator A outperformed the standard estimator S in terms of bias, root mean square error and coverage rate. Thus the new estimator provides a good alternative to the standard estimator.
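
The GREG construction can be sketched for a domain class frequency with a logistic assisting model; the data are simulated and the model fit is simplified (unweighted), so this is an illustration rather than the thesis's exact estimator:

```python
# Sketch of an L-GREG-style estimator of a class frequency (total of a
# binary indicator): a logistic assisting model fitted on the sample,
# predictions summed over the frame, plus a design-weighted sum of
# sample residuals. Simple random sampling and an unweighted model fit
# are simplifying assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Frame (population) auxiliary data and a sample drawn from it.
N, n = 10_000, 400
x_U = rng.normal(size=(N, 2))                       # auxiliary variables
p_U = 1 / (1 + np.exp(-(0.5 + x_U @ [1.0, -0.7])))  # true class propensity
y_U = rng.binomial(1, p_U)                          # class indicator

s = rng.choice(N, size=n, replace=False)            # SRS for simplicity
pi = np.full(n, n / N)                              # inclusion probabilities

model = LogisticRegression().fit(x_U[s], y_U[s])    # assisting model
yhat_U = model.predict_proba(x_U)[:, 1]             # predictions on frame

# GREG estimator: sum of predictions + weighted residual correction.
t_greg = yhat_U.sum() + ((y_U[s] - yhat_U[s]) / pi).sum()
print(f"L-GREG estimate: {t_greg:.0f}, true total: {y_U.sum()}")
```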

Relevance:

100.00%

Publisher:

Abstract:

The magnetic field of the Earth is 99% of internal origin, generated in the outer liquid core by the dynamo principle. In the 19th century, Carl Friedrich Gauss proved that the field can be described by a sum of spherical harmonic terms. Presently, this theory is the basis of, e.g., the IGRF (International Geomagnetic Reference Field) models, which are the most accurate description available for the geomagnetic field. On average, the dipole forms 3/4 and the non-dipolar terms 1/4 of the instantaneous field, but the temporal mean of the field is assumed to be a pure geocentric axial dipole field. The validity of this GAD (Geocentric Axial Dipole) hypothesis has been estimated using several methods. In this work, the testing rests on the frequency distribution of inclination with respect to latitude. Each combination of dipole (GAD), quadrupole (G2) and octupole (G3) produces a distinct inclination distribution. These theoretical distributions have been compared with those calculated from empirical observations from different continents and, last, from the entire globe. Only data from Precambrian rocks (over 542 million years old) have been used in this work. The basic assumption is that during the long-term course of drifting continents, the globe is sampled adequately. There were 2823 observations altogether in the paleomagnetic database of the University of Helsinki. The effects of the quality of observations, as well as of the age and rock type, have been tested. For the comparison between theoretical and empirical distributions, chi-square testing has been applied. In addition, spatiotemporal binning has been used effectively to remove the errors caused by multiple observations. The modelling from igneous rock data shows that the average magnetic field of the Earth is best described by a combination of a geocentric dipole and a very weak octupole (less than 10% of GAD). Filtering and binning gave the distributions a more GAD-like appearance, but the deviation from GAD increased as a function of the age of the rocks. The distribution calculated from the so-called key poles, the most reliable determinations, behaves almost like GAD, having a zero quadrupole and an octupole of 1% of GAD. In no earlier study have rocks older than 400 Ma given a result so close to GAD, and low inclinations have been prominent especially in the sedimentary data. Despite these results, a greater amount of high-quality data and a proof of the long-term randomness of the Earth's continental motions are needed to make sure the dipole model holds true.
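
The inclination test rests on the dipole formula tan I = 2 tan λ. A Monte Carlo sketch of the theoretical GAD inclination distribution under the adequate-sampling assumption (bin width chosen here for illustration):

```python
# Monte Carlo sketch of the GAD reference distribution: for a geocentric
# axial dipole, inclination I and latitude lambda obey tan I = 2 tan(lambda),
# so uniform sampling of the globe fixes the expected inclination
# frequencies that the empirical data are compared against.
import numpy as np

rng = np.random.default_rng(7)
lat = np.degrees(np.arcsin(rng.uniform(-1, 1, size=100_000)))  # uniform on sphere
inc = np.degrees(np.arctan(2 * np.tan(np.radians(lat))))       # GAD inclination

# Frequency distribution of |I| in 10-degree bins (cf. the chi-square test):
hist, edges = np.histogram(np.abs(inc), bins=np.arange(0, 100, 10))
for lo, count in zip(edges[:-1], hist):
    print(f"{lo:2.0f}-{lo + 10:2.0f} deg: {count / len(inc):.3f}")
```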

Relevance:

100.00%

Publisher:

Abstract:

This report assesses the effect of climate change on the wintertime freezing of soil in Finland on the basis of temperature sums. The calculations describe frost depth specifically in snow-free areas, for example on roads from which fallen snow is ploughed away. In nature, under a heat-insulating snow cover, ground frost is thinner than in such snow-free areas. On the other hand, in a natural environment local differences are accentuated owing to, among other things, soil types and vegetation. Frost depths were first calculated for the climate conditions of the baseline period 1971–2000, using wintertime temperatures based on weather observations. The calculations were then repeated for three future periods (2010–2039, 2040–2069 and 2070–2099) by raising the temperatures as projected by climate change models. The calculations were based on the mean temperature change simulated by the A1B scenario runs of 19 climate models. To assess the sensitivity of the results, some calculations were also carried out using clearly weaker and stronger warming estimates. If the temperature rise of the A1B scenario is realized in line with current model results, the frost layer will thin over the next hundred years by 30–40% in northern Finland and by 50–70% in most of the central and southern parts of the country. Already in the coming decades, ground frost is projected to thin by 10–30%, and more in the archipelago. If warming were to follow the strongest alternative examined, frost depth would decrease even more. The interannual variation of frost depth and its future changes were also assessed. In mild winters, ground frost thins more than in normal or severe winters. However, the data produced by the weather generator used to simulate daily weather variability contained too few very low and very high temperatures. Therefore, frost depths calculated from these temperature data apparently also vary too little from year to year. Thaw-weakening conditions can also occur in the middle of the frost season, if several days of thaw combined with heavy rain manage to melt the ground. Such weather situations during the frost season appear to become more common in the coming decades. Towards the end of the century, however, they will again become less frequent in the southern parts of the country, because the frost season will shorten substantially. Alongside climate change projections for the coming decades, ground frost and the occurrence of thaw-weakening can in principle also be forecast using short-term weather forecasts. Long weather forecasts, spanning weeks or months, are admittedly not yet particularly reliable, but even shorter forecasts could be useful, for example in planning road maintenance.
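
The temperature-sum approach can be sketched with a Stefan-type relation, in which frost depth grows with the square root of the freezing index; the coefficient and the input series below are illustrative assumptions, not the report's calibration:

```python
# Sketch of a temperature-sum (freezing index) approach to frost depth:
# accumulate degree-days below 0 degC and map them to depth with a
# Stefan-type square-root relation, d = C * sqrt(F). The coefficient C
# bundles soil thermal properties and is assumed for illustration.
import math

def freezing_index(daily_mean_temps_c):
    """Sum of degree-days below 0 degC over the winter."""
    return sum(max(-t, 0.0) for t in daily_mean_temps_c)

def frost_depth_cm(freezing_index_ddays, coeff=3.0):
    """Stefan-type estimate: depth grows with the square root of the
    freezing index."""
    return coeff * math.sqrt(freezing_index_ddays)

winter = [-2.0, -5.0, -10.0, -8.0, 1.0, -12.0, -15.0, -6.0]  # hypothetical
F = freezing_index(winter)
print(f"freezing index = {F:.0f} degC days -> frost depth ~ "
      f"{frost_depth_cm(F):.0f} cm")
```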