18 results for uncertain polynomials

in Helda - Digital Repository of University of Helsinki


Relevance: 10.00%

Abstract:

In this study I discuss G. W. Leibniz's (1646-1716) views on rational decision-making from the standpoint of both God and man. The Divine decision takes place within creation, as God freely chooses the best from an infinite number of possible worlds. While God's choice is based on absolutely certain knowledge, human decisions on practical matters are mostly based on uncertain knowledge. However, in many respects they could be regarded as analogous, especially in more complicated situations. In addition to giving an overview of divine decision-making and discussing critically the criteria God favours in his choice, I provide an account of Leibniz's views on human deliberation, which includes some new ideas. One of these concerns the importance of estimating probabilities in decision-making: one estimates both the goodness of the act itself and its consequences with respect to the desired good. Another idea is related to the plurality of goods in complicated decisions and the competition this may provoke. Thirdly, heuristic models are used to sketch situations under deliberation in order to help in making the decision. Combining the views of Marcelo Dascal, Jaakko Hintikka and Simo Knuuttila, I argue that Leibniz applied two kinds of models of rational decision-making to practical controversies, often without explicating the details. The simpler, traditional pair-of-scales model is best suited to cases in which one has to decide for or against some option, or to distribute goods among parties and strive for a compromise. What may be of more help in more complicated deliberations is the novel vectorial model, which is an instance of the general mathematical doctrine of the calculus of variations. To illustrate this distinction, I discuss some cases in which he apparently applied these models in different kinds of situations. These examples support the view that the models had a systematic value in his theory of practical rationality.
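
This estimation idea admits a compact modern formalization. The following rendering is an illustrative sketch in present-day notation, not Leibniz's own calculus: an act is evaluated by its intrinsic goodness plus the probability-weighted goodness of its consequences.

```latex
\hat{G}(a) = g(a) + \sum_{i=1}^{n} p_i \, g(o_i),
\qquad a^{*} = \arg\max_{a} \hat{G}(a)
```

Here the o_i are the possible outcomes of the act a, the p_i their estimated probabilities, and g the estimated goodness. On this reading, the pair-of-scales model compares the estimate for two options, while the vectorial model treats several competing goods as components to be jointly optimized.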

Relevance: 10.00%

Abstract:

This research is based on the problems in secondary school algebra that I have noticed in my own work as a teacher of mathematics. Algebra does not touch the pupil; it remains knowledge that is not used or tested. Furthermore, the performance level in algebra is quite low. This study presents a model for 7th grade algebra instruction in order to make algebra more natural and useful to students. I refer to the instruction model as Idea-based Algebra (IDEAA). The basic ideas of the IDEAA model are 1) to combine children's own informal mathematics with scientific mathematics ("math math") and 2) to structure algebra content as a "map of big ideas", not as a traditional sequence of powers, polynomials, equations, and word problems. This research project is a kind of design process or design research. As such, it has three intertwined goals: research, design and pedagogical practice. I also assume three roles. As a researcher, I want to learn about learning and school algebra, its problems and possibilities. As a designer, I use research in the intervention to develop a shared artefact, the instruction model. In addition, I want to improve practice through intervention and research. Design research of this kind is quite challenging. Its goals and means are intertwined and change during the research process. Theory emerges from the inquiry; it is not given a priori. The aim of improving instruction is normative, as one should take into account what "good" means in school algebra. An important part of my study is to work out these paradigmatic questions. The result of the study is threefold. The main result is the instruction model designed in the study. The second result is the theory of teaching, learning and algebra developed alongside it. The third result is knowledge of the design process. The instruction model (IDEAA) is connected to four main features of good algebra education: 1) the situationality of learning, 2) learning as knowledge building, in which natural language and intuitive thinking work as "intermediaries", 3) the emergence and diversity of algebra, and 4) the development of high performance skills at any stage of instruction.

Relevance: 10.00%

Abstract:

The forest simulator is a computerized model for predicting forest growth and future development, as well as the effects of forest harvests and treatments. The forest planning system is a decision support tool, usually including a forest simulator and an optimisation model, for finding the optimal forest management actions. The information produced by forest simulators and forest planning systems is used for various analytical purposes and in support of decision making. However, the quality and reliability of this information can often be questioned. Natural variation in forest growth and estimation errors in forest inventory, among other things, cause uncertainty in predictions of forest growth and development. This uncertainty, stemming from different sources, has various undesirable effects. In many cases, the outcomes of decisions based on uncertain information differ from what was desired. The objective of this thesis was to study various sources of uncertainty and their effects in forest simulators and forest planning systems. The study focused on three notable sources of uncertainty: errors in forest growth predictions, errors in forest inventory data, and stochastic fluctuation of timber assortment prices. The effects of uncertainty were studied using two types of forest growth models, individual tree-level models and stand-level models, and with various error simulation methods. A new method for simulating more realistic forest inventory errors was introduced and tested. The three notable sources of uncertainty were also combined and their joint effects on stand-level net present value estimates were simulated. According to the results, the various sources of uncertainty can have distinct effects in different forest growth simulators. The new forest inventory error simulation method proved to produce more realistic errors. The analysis of the joint effects of the various sources of uncertainty provided interesting insights into uncertainty in forest simulators.
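
As a rough illustration of how such joint effects can be studied, the sketch below propagates the three error sources named above (growth prediction error, inventory error, and price fluctuation) to a stand-level net present value distribution by Monte Carlo simulation. All quantities and error distributions are hypothetical placeholders, not the thesis's simulators or data.

```python
import numpy as np

rng = np.random.default_rng(42)

N = 10_000                # Monte Carlo replications
true_volume = 120.0       # "true" growing stock, m3/ha (hypothetical)
growth_rate = 0.03        # annual volume growth rate
years = 20                # planning horizon, a
discount = 0.03           # discount rate
base_price = 55.0         # mean timber price, EUR/m3

# 1) forest inventory error: measured volume deviates from the truth
inv_volume = true_volume * (1 + rng.normal(0.0, 0.15, N))
# 2) growth prediction error: the simulator over/underestimates growth
growth = growth_rate + rng.normal(0.0, 0.01, N)
# 3) stochastic fluctuation of the timber price at harvest
price = base_price * np.exp(rng.normal(0.0, 0.20, N))

harvest_volume = inv_volume * (1 + growth) ** years
npv = harvest_volume * price / (1 + discount) ** years

print(f"mean NPV {npv.mean():.0f} EUR/ha, 95% interval "
      f"[{np.percentile(npv, 2.5):.0f}, {np.percentile(npv, 97.5):.0f}]")
```

The width of the resulting NPV interval shows how the individual error sources compound, which is the kind of joint effect the thesis quantifies.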

Relevance: 10.00%

Abstract:

Scots pine (Pinus sylvestris L.) and Norway spruce (Picea abies (L.) Karst.) forests dominate in Finnish Lapland. The need to study the effect of both soil factors and site preparation on the performance of planted Scots pine has increased due to the problems encountered in reforestation, especially on mesic and moist, formerly spruce-dominated sites. The present thesis examines soil hydrological properties and conditions, and the effect of site preparation on them, on 10 pine- and 10 spruce-dominated upland forest sites. Finally, the effects of both the site preparation and reforestation methods and of soil hydrology on the long-term performance of planted Scots pine are summarized. The results showed that pine and spruce sites differ significantly in their soil physical properties. Under field capacity or wetter soil moisture conditions, planted pines presumably suffer from excessive soil water and poor soil aeration on most of the originally spruce sites, but not on the pine sites. The results also suggested that site preparation affects the soil-water regime, and thus the prerequisites for forest growth, over two decades after site preparation. High variation in the survival and mean height of planted pine was found. The study suggested that on spruce sites, pine survival is lowest on sites that dry out slowly after rainfall events, and that height growth is fastest on soils that reach favourable aeration conditions for root growth soon after saturation, and/or where the average air-filled porosity near field capacity is large enough for good root growth. Survival, but not mean height, can be enhanced by employing intensive site preparation methods on spruce sites. On coarser-textured pine sites, site preparation methods don't affect survival, but methods affecting soil fertility, such as prescribed burning and ploughing, seem to enhance the height growth of planted Scots pines over several decades. The use of in situ soil water content as the sole criterion for identifying sites suitable for pine reforestation was tested and found to be relatively uncertain. The thesis identified new potential soil variables, which should be tested on other data in the future.
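
For reference, the air-filled porosity referred to above is the standard soil-physics quantity (a textbook definition, not specific to this thesis):

```latex
\varepsilon_a = \phi - \theta
```

where phi is the total porosity and theta the volumetric water content; evaluating this at field capacity gives the aeration measure linked above to root growth.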

Relevance: 10.00%

Abstract:

Spring barley is the most important crop in Finland in terms of cultivated land area. Net blotch, a disease caused by Pyrenophora teres Drech., is the most damaging disease of barley in Finland. The pressure to improve the economics and efficiency of agriculture has increased the need for more efficient plant protection methods. Development of durable host-plant resistance to net blotch is a promising possibility. However, deployment of disease-resistant crops could initiate selection pressure on the pathogen (P. teres) population. The aim of this study was to understand the population biology of P. teres and to estimate the evolutionary potential of P. teres under the selective pressure following deployment of resistance genes and application of fungicides. The study included mainly Finnish P. teres isolates; population samples from Russia and Australia were also included. Using AFLP markers, substantial genotypic variation in P. teres populations was identified. Differences among isolates were smallest within Finnish fields and significantly higher in Krasnodar, Russia. Genetic differentiation was identified among populations from northern Europe and from Australia, and between the two forms P. teres f. teres (PTT, net form of net blotch) and P. teres f. maculata (PTM, spot form of net blotch) in Australia. Differentiation among populations was also identified based on virulence, between the Finnish and Russian populations, and based on prochloraz (fungicide) tolerance, in the Häme region in Finland. Surprisingly, only PTT was recovered from Finland and Russia, although both forms were earlier equally common in Finland. The reason for this shift in the occurrence of the forms in Finland remained uncertain. Both forms were found within several fields in Australia. Sexual reproduction of P. teres was supported by the recovery of both mating types in equal ratios in these areas, although the prevalence of sexual mating seems to be lower in Finland than in Australia. The Krasnodar population was an exception, since only one mating type was found there. Based on its substantially high genotypic variation, the Krasnodar population was suggested to represent an old P. teres population, whereas the Australian samples were suggested to represent newer populations. In conclusion, P. teres populations are differentiated at several levels. Human assistance in the dispersal of P. teres on infected barley seed is obvious and decreases the differentiation among populations. This can increase the plant protection problems caused by this pathogen. P. teres is capable of sexual reproduction in several areas, but the prevalence varies. Based on these findings it is apparent that P. teres has the potential to pose more serious problems in barley cultivation if plant protection is neglected. Therefore, good agricultural practices, including crop rotation and the use of healthy seed, are recommended.
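
The equal-ratio observation mentioned above is typically checked with a binomial test against a 1:1 mating-type ratio. The sketch below illustrates that logic with hypothetical isolate counts, not the thesis's data.

```python
from scipy.stats import binomtest

# Hypothetical counts of isolates carrying each mating type
mat1, mat2 = 23, 27

result = binomtest(mat1, mat1 + mat2, p=0.5)
print(f"two-sided p-value against a 1:1 ratio: {result.pvalue:.3f}")
# A non-significant result is consistent with regular sexual
# reproduction; recovering only one mating type (as in Krasnodar)
# points to clonal reproduction instead.
```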

Relevance: 10.00%

Abstract:

The objective of this thesis is to evaluate different means of increasing the natural reproduction of migratory fish, especially salmon, in the river Kymijoki. The original stocks of migratory fish in Kymijoki were lost by the 1950s because of hydropower plants and the deteriorated water quality of the river. Nowadays the salmon stock is based on hatchery-reared fish, even though there is significant potential for natural smolt production in the river. The main problem in natural reproduction is that the migratory fish cannot ascend to the reproduction areas above the Korkeakoski and Koivukoski hydropower plants. In this thesis, alternative projects that aim to open these ascent routes are evaluated, along with their costs and benefits. The method used in the evaluation is social cost-benefit analysis. The alternative projects evaluated in this thesis consist of projects that aim to change the flow patterns between the eastern branches of Kymijoki and projects that involve building a fish ladder; different combinations of these projects are also considered. The objective is to find the project that is the most profitable to execute; this evaluation is done by comparing the net present values of the projects. In addition, a sensitivity analysis is made on the most uncertain parameter values. We compare the net present values of the projects with the net present values of hatchery-reared smolt releases, so that we can evaluate whether the projects or the smolt releases are more socially profitable in the long term. The results of this thesis indicate that the projects that involve building a fish ladder next to the Korkeakoski hydropower plant are the most socially profitable. If this fish ladder were built, the natural reproduction of salmon in the river Kymijoki could become so extensive that hatchery-reared smolt releases could even be stopped. The results of the sensitivity analysis indicate that the net present values of the projects depend especially on the initial smolt survival rate of wild salmon and on the functioning of the potential fish ladder at Korkeakoski. Changes in other parameter values also influence the results of the cost-benefit analysis, but not as significantly. When the net present values of the projects and the smolt releases are compared, the results depend on which period is selected for calculating the average catches of reared salmon. If the average of the last 5 years' catches is used in calculating the net benefits of smolt releases, all the alternative projects are more profitable than the releases. When the average of the last 10 years is used, only the building of the fish ladder at Korkeakoski and all the project combinations are more profitable than the smolt releases.
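
A minimal sketch of the NPV comparison and sensitivity analysis described above follows. All investment costs, annual benefits, the discount rate, the horizon, and the assumed proportionality of benefits to smolt survival are hypothetical placeholders, not the thesis's figures.

```python
def npv(annual_net_benefit, investment=0.0, rate=0.05, years=50):
    """NPV of a one-off investment followed by a constant annual
    net benefit stream."""
    pv = sum(annual_net_benefit / (1 + rate) ** t
             for t in range(1, years + 1))
    return pv - investment

alternatives = {
    "fish ladder at Korkeakoski":    npv(250_000, investment=2_500_000),
    "flow change, eastern branches": npv(120_000, investment=1_000_000),
    "hatchery smolt releases":       npv(150_000),
}
for name, value in alternatives.items():
    print(f"{name}: NPV {value:,.0f} EUR")

# Sensitivity analysis on the most uncertain parameter, here the
# initial smolt survival rate (benefits assumed proportional to it)
for survival in (0.05, 0.10, 0.15):
    v = npv(250_000 * survival / 0.10, investment=2_500_000)
    print(f"survival {survival:.0%}: ladder NPV {v:,.0f} EUR")
```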

Relevance: 10.00%

Abstract:

A vast amount of public services and goods is contracted through procurement auctions. It is therefore very important to design these auctions in an optimal way. Typically, we are interested in two different objectives. The first objective is efficiency. Efficiency means that the contract is awarded to the bidder that values it the most, which in the procurement setting means the bidder that has the lowest cost of providing a service with a given quality. The second objective is to maximize public revenue; maximizing public revenue means minimizing the costs of procurement. Both of these goals are important from the welfare point of view. In this thesis, I analyze field data from procurement auctions and show how empirical analysis can be used to help design the auctions to maximize public revenue. In particular, I concentrate on how competition, meaning the number of bidders, should be taken into account in the design of auctions. In the first chapter, the main policy question is whether the auctioneer should spend resources to induce more competition. The information paradigm is essential in analyzing the effects of competition. We speak of a private values information paradigm when the bidders know their valuations exactly; in a common value information paradigm, the information about the value of the object is dispersed among the bidders. With private values, more competition always increases public revenue, but with common values the effect of competition is uncertain. I study the effects of competition in the City of Helsinki bus transit market by conducting tests for common values. I also extend an existing test by allowing bidder asymmetry. The information paradigm seems to be that of common values. The bus companies that have garages close to the contracted routes are influenced more by the common value elements than those whose garages are further away. Therefore, attracting more bidders does not necessarily lower procurement costs, and thus the City should not implement costly policies to induce more competition. In the second chapter, I ask how the auctioneer can increase its revenue by changing contract characteristics such as contract sizes and durations. I find that the City of Helsinki should shorten the contract duration in the bus transit auctions, because that would decrease the importance of the common value components and cheaply increase entry, which would now have a more beneficial impact on public revenue. Typically, cartels decrease public revenue in a significant way. In the third chapter, I propose a new statistical method for detecting collusion and compare it with an existing test. I argue that my test is robust to unobserved heterogeneity, unlike the existing test. I apply both methods to procurement auctions that contract snow removal for schools in Helsinki. According to these tests, the bidding behavior of two of the bidders seems consistent with a contract allocation scheme.
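
To illustrate why the information paradigm matters, the sketch below simulates a first-price (lowest-bid) procurement auction with independent private costs drawn uniformly on [0, 1]; the expected winning bid then falls toward the lowest cost as the number of bidders grows. This is a textbook private-values illustration, not the common-values test used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_winning_bid(n, reps=100_000):
    """Simulate a lowest-bid procurement auction with n bidders."""
    costs = rng.uniform(0.0, 1.0, size=(reps, n))
    # symmetric equilibrium bid with U[0,1] costs: b(c) = c + (1 - c)/n
    bids = costs + (1.0 - costs) / n
    return bids.min(axis=1).mean()

for n in (2, 4, 8, 16):
    print(f"n={n:2d}: expected winning bid {expected_winning_bid(n):.3f}")
# Matches the theoretical value 2/(n+1): more bidders, lower price.
# Under common values this monotonicity can fail, which is why testing
# the information paradigm matters for competition policy.
```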

Relevance: 10.00%

Abstract:

This study examines Finnish economic growth. The key driver of economic growth was productivity, and the major engine of productivity growth was technology, especially the general purpose technologies (GPTs) electricity and ICT. A new GPT builds on previous knowledge, yet often in an uncertain, punctuated fashion. Economic history, as well as the Finnish data analyzed in this study, teaches that growth is not a smooth process but is subject to episodes of sharp acceleration and deceleration, which are associated with the arrival, diffusion and exhaustion of new general purpose technologies. These are technologies that affect the whole economy by transforming both household life and the ways in which firms conduct business. The findings of previous research, that Finnish economic growth exhibited late industrialisation and significant structural change, were corroborated by this study. Yet it was not solely a story of manufacturing, and structural change was more the effect than the cause of economic growth. We offered an empirical resolution to the Artto-Pohjola paradox by showing that a high rate of return on capital was combined with low capital productivity growth. This result is important in understanding Finnish economic growth in 1975-90. The main contribution of this thesis is the growth accounting results on the impact of ICT on growth and productivity, as well as the comparison of electricity and ICT. It was shown that ICT's contribution to GDP growth was almost twice as large as electricity's contribution over comparable periods of time. Finland has thus been far more successful as a producer of ICT than as a producer of electricity. In the use of ICT, however, the results were more modest than for electricity. Towards the end of the period considered in this thesis, Finland switched from resource-based to ICT-based growth. However, given the large dependency on the ICT-producing sector, the ongoing outsourcing of ICT production to low-wage countries poses a threat to future productivity performance. For a developed country only change is constant, and history teaches us that Finland will likely be obliged to reorganize its economy once again in the digital era.
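
The growth-accounting logic referred to above fits in a few lines: an input's contribution to GDP growth is its income share times its growth rate, and total factor productivity is the residual. The shares and growth rates below are hypothetical, not the thesis's estimates.

```python
gdp_growth = 0.035                   # annual GDP growth (hypothetical)
inputs = {                           # name: (income share, growth rate)
    "labour":        (0.60, 0.005),
    "ICT capital":   (0.05, 0.150),
    "other capital": (0.35, 0.030),
}

contributions = {name: share * g for name, (share, g) in inputs.items()}
tfp = gdp_growth - sum(contributions.values())   # the Solow residual

for name, c in contributions.items():
    print(f"{name:>13}: {c:.4f} ({c / gdp_growth:.0%} of GDP growth)")
print(f"{'TFP':>13}: {tfp:.4f}")
# Note how a small income share combined with fast input growth (ICT)
# can still yield a large contribution, as the thesis finds.
```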

Relevance: 10.00%

Abstract:

The increase in global temperature has been attributed to increased atmospheric concentrations of greenhouse gases (GHG), mainly CO2. The threat of severe and complex socio-economic and ecological implications of climate change has initiated an international process that aims to reduce emissions, to increase C sinks, and to protect existing C reservoirs. The famous Kyoto protocol is an offspring of this process. The Kyoto protocol and its accords state that signatory countries need to monitor their forest C pools, and to follow the guidelines set by the IPCC in the preparation, reporting and quality assessment of the C pool change estimates. The aims of this thesis were i) to estimate the changes in the carbon stocks of vegetation and soil in Finnish forests from 1922 to 2004, ii) to evaluate the applied methodology by using empirical data, iii) to assess the reliability of the estimates by means of uncertainty analysis, iv) to assess the effect of forest C sinks on the reliability of the entire national GHG inventory, and finally, v) to present an application of model-based stratification to a large-scale sampling design of soil C stock changes. The applied methodology builds on measured forest inventory data (or modelled stand data), and uses statistical modelling to predict biomasses and litter production, as well as a dynamic soil C model to predict the decomposition of litter. The mean vegetation C sink of Finnish forests from 1922 to 2004 was 3.3 Tg C a⁻¹, and the mean soil C sink was 0.7 Tg C a⁻¹. Soil is slowly accumulating C because the growing stock has increased and soil C stocks are unsaturated in relation to the current detritus input, which is higher than at the beginning of the period. Annual estimates of vegetation and soil C stock changes fluctuated considerably during the period and were frequently opposite in sign (e.g. vegetation was a sink while soil was a source). The inclusion of vegetation sinks in the national GHG inventory of 2003 increased its uncertainty from between -4% and 9% to ±19% (95% CI), and the further inclusion of upland mineral soils increased it to ±24%. The uncertainties of annual sinks can be reduced most efficiently by concentrating on the quality of the model input data. Despite the decreased precision of the national GHG inventory, the inclusion of uncertain sinks improves its accuracy due to the larger sectoral coverage of the inventory. If the national soil sink estimates were prepared by repeated soil sampling of model-stratified sample plots, the uncertainties would be accounted for in the stratum formation and sample allocation; otherwise, the gains in sampling efficiency from stratification remain smaller. The highly variable and frequently opposite annual changes in ecosystem C pools imply the importance of full ecosystem C accounting. If forest C sink estimates are to be used in practice, average sink estimates seem a more reasonable basis than annual estimates, because annual forest sinks vary considerably, annual estimates are uncertain, and both have severe consequences for the reliability of the total national GHG balance. The estimation of average sinks should still be based on annual or even more frequent data, due to the non-linear decomposition process that is influenced by the annual climate. The methodology used in this study to predict forest C sinks can be transferred to other countries with some modifications. The ultimate verification of sink estimates should be based on comparison to empirical data, in which case the model-based stratification presented in this study can serve to improve the efficiency of the sampling design.
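
As a caricature of the litter-input/decomposition mechanism behind the soil sink, the sketch below iterates a one-pool soil C model, dC/dt = I - kC, with noisy annual litter input; soil accumulates C as long as input exceeds decomposition. The thesis's dynamic soil C model has several pools, and every number here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

k = 0.02          # decomposition rate, 1/a (hypothetical)
C = 5000.0        # initial soil C stock, g C/m2 (hypothetical)
years = 80

stocks = [C]
for _ in range(years):
    litter = 120.0 + rng.normal(0.0, 15.0)  # annual litter input, g C/m2/a
    C = C + litter - k * C                  # explicit Euler step, dt = 1 a
    stocks.append(C)

sink = (stocks[-1] - stocks[0]) / years
print(f"mean soil C sink {sink:.1f} g C m-2 a-1 over {years} a")
# The stock approaches the equilibrium I/k; as long as the current stock
# is below it (unsaturated), the soil acts as a sink.
```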

Relevance: 10.00%

Abstract:

Hypokinesia, rigidity, tremor, and postural instability are the cardinal symptoms of Parkinson's disease (PD). Since these symptoms are not specific to PD, the diagnosis may be uncertain in early PD. The etiology and pathogenesis of PD remain unclear, and there is no neuroprotective therapy. Genetic findings are expected to reveal metabolic routes in PD pathogenesis and thereby eventually lead to therapeutic innovations. In this thesis, we first aimed to study the usefulness and accuracy of 123I-β-CIT SPECT in the diagnosis of PD in a consecutive clinic-based material including various movement disorders. We subsequently conducted a genetic project to identify genetic risk factors for sporadic PD, using a candidate gene approach in a case-control setting including 147 sporadic PD patients and 137 spouse controls. Dopamine transporter imaging by 123I-β-CIT SPECT could distinguish PD from essential tremor, drug-induced parkinsonism, dystonia and psychogenic parkinsonism. However, β-CIT uptake in Parkinson-plus syndromes (PSP and multiple system atrophy) and dementia with Lewy bodies was not significantly different from that in PD, and 123I-β-CIT SPECT could not reliably differentiate PD from vascular parkinsonism. 123I-β-CIT SPECT was 100% sensitive and specific in the diagnosis of PD in patients younger than 55 years, but less specific in older patients, due to the differential distribution of the above conditions in the younger and older age groups. 123I-β-CIT SPECT correlated with symptoms and detected a bilateral nigrostriatal defect in patients whose PD was still at the unilateral stage. Thus, in addition to serving as a differential diagnostic aid, 123I-β-CIT SPECT may be used to detect PD early, even pre-symptomatically in at-risk individuals. 123I-β-CIT SPECT was used to aid in the collection of patients for the genetic studies. In the genetic part of this thesis, we found an association between PD and a polymorphic CAG-repeat in the POLG1 gene encoding the catalytic subunit of mitochondrial polymerase gamma. The CAG-repeat encodes a polyglutamine tract (polyQ), the two most common lengths of which are 10Q (86-90%) and 11Q. In our Finnish material, the rarer non-10Q or non-11Q length variants (6Q-9Q, 12Q-14Q, 4R+9Q) were more frequent in patients than in spouse controls (10% vs. 3.5%, p=0.003) or population controls (p=0.001). Therefore, we performed a replication study in 652 North American PD patients and 292 controls. Non-10/11Q alleles were more common in the US PD patients than in the controls, but the difference did not reach statistical significance (p=0.07). This larger data set suggested that our original definition of the variant length allele might need reconsideration. Most previous studies on the phenotypic effects of POLG1 polyQ have defined 10Q as the only normal allele. Non-10Q alleles were significantly more common in patients than in controls (17.3% vs. 12.3%, p=0.005). This association between non-10Q length variants and PD remained significant when compared to a larger set of 1541 literature controls (p=0.00005). In conclusion, POLG1 polyQ alleles other than 10Q may predispose to PD. We did not find an association between PD and parkin or DJ-1, genes underlying autosomal recessive parkinsonism. The functional Val158Met polymorphism, which affects the catalytic activity of the COMT enzyme, and another coding polymorphism in COMT were not associated with PD in our patient material. The APOE ε2/3/4 polymorphism modifies the risk for Alzheimer's disease and the prognosis of, for example, brain trauma. The APOE promoter and enhancer polymorphisms 219G/T and +113G/C, and APOE ε3 haplotypes, have also been shown to modify the risk of Alzheimer's disease, but have not been reported in PD. No association was found between PD and the APOE ε2/3/4 polymorphism, the promoter or enhancer polymorphisms, or the ε3 haplotypes.
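
The patient-control allele comparison reported above is the kind of 2x2 comparison that Fisher's exact test handles directly. The counts below are approximate reconstructions from the reported frequencies (10% vs. 3.5%, with 147 patients and 137 spouse controls contributing two POLG1 alleles each), so the p-value only approximates the reported p=0.003.

```python
from scipy.stats import fisher_exact

# Reconstructed, approximate allele counts (hypothetical)
patient_rare, patient_common = 29, 265   # ~10% of 2*147 alleles
control_rare, control_common = 10, 264   # ~3.6% of 2*137 alleles

odds, p = fisher_exact([[patient_rare, patient_common],
                        [control_rare, control_common]])
print(f"odds ratio {odds:.2f}, p = {p:.4f}")
```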

Relevance: 10.00%

Abstract:

This dissertation considers the problem of trust in the context of food consumption. The research perspectives refer to institutional conditions for consumer trust, personal practices of food consumption, and the strategies consumers employ for controlling the safety of their food. The main concern of the study is to investigate consumer trust as an adequate response to food risks, i.e. a strategy helping the consumer to make safe choices in an uncertain food situation. The "risky" perspective serves as a frame of reference for understanding and explaining trust relations. The original aim of the study was to reveal the meanings applied to the concepts of trust, safety and risk in the perspective of market choices, the assessment of food risks and the ways of handling them. Supplementary research tasks included descriptions of the institutional conditions for consumer trust, including descriptions of the food market, and the presentation of food consumption patterns in St. Petersburg. The main empirical material is based on qualitative interviews with consumers and on interviews and group discussions with professional experts (market actors, representatives of inspection bodies and consumer organizations). Secondary material is used for describing the institutional conditions for consumer trust and the market situation. The results suggest that the idea of consumer trust is associated with the reputation of suppliers, the stable quality and taste of their products, and reliable food information. Being a subjectively constructed state connected to the act of acceptance, consumer trust results in positive buying decisions and stable preferences in the food market. The consumers' strategies that aim at safe food choices rely on repeated interactions with reliable market actors, which free them from constant deliberation in the marketplace. Trust in food is highly mediated by trust in the institutions involved in the food system. The analysis reveals a clear pattern of disbelief in the efficiency of institutional food control. The study analyses this as a reflection of "total distrust", which appears to be a dominant mood in many contexts of modern Russia. However, the interviewees emphasize the state's decisive role in suppressing risks in the food market. The findings are also discussed with reference to the consumers' possibilities of personal control over food risks. Three main responses to a risky food situation are identified: the reflexive approach, the traditional approach, and the fatalistic approach.

Relevance: 10.00%

Abstract:

This study focuses on self-employed industrial designers and how they bring new venture ideas into being. More specifically, the study strives to determine what design entrepreneurs do when they create new venture ideas, how venture ideas are nurtured into being, and how the processes are organized to bring such ideas to the market in the given industrial context. In contemporary times, when concern for the creative class is peaking, the research and business communities need more insight of the kind this study provides, namely into how professionals may contribute to their own entrepreneurial processes and to other agents' business processes. On the one hand, the interviews underlying this study suggest that design entrepreneurs may act as reactive service providers who are appointed by producers or marketing parties to generate product-related ideas on their behalf. On the other hand, the interviews suggest that proactive behaviour aimed at generating their own venture ideas may force design entrepreneurs to take considerable responsibility for organizing their entrepreneurial processes. Another option is that they strive to bring venture ideas to the market in collaboration, or by passing these on to other agents' product development processes. Design entrepreneurs' venture ideas typically emerge from design-related starting points and observations. Product developers are mainly engaged with creating their own ideas, whereas service providers refer mainly to the development of other agents' venture ideas. In contrast with design entrepreneurs, external actors commonly emphasize customer demand as their primary source of new venture ideas, as well as the development of these in close interaction with available means of production and marketing. Consequently, design entrepreneurs need to address market demand, since without sales their venture ideas may as well be classified as art. If they want to experiment with creative ideas, there should be another source of income to support this typically uncertain and extensive process. Currently, it appears that many good venture ideas and resources are being wasted when venture ideas do not suit available production or business procedures. Sufficient communication between design entrepreneurs and other agents would assist all parties in developing production-efficient and distributable venture ideas. Overall, the findings suggest that design entrepreneurs are often involved simultaneously in several processes that aim at creating new product-related ventures. Consequently, design entrepreneurship is conceptualized in this study as a dual process: design entrepreneurs can simultaneously be in charge of their own entrepreneurial processes while operating as resources in other agents' business processes. The interconnection between activities and agents suggests that these processes tend to be both complex and multifaceted in nature.

Relevance: 10.00%

Abstract:

Sea level rise is among the most worrying consequences of climate change, and the biggest uncertainty in sea level predictions lies in the future behaviour of the ice sheets of Greenland and Antarctica. In this work, a literature review is made concerning the future of the Greenland ice sheet and the effect of its melting on Baltic Sea level. The relation between sea level and ice sheets is also considered more generally, from a theoretical and historical point of view. Lately, surprisingly rapid changes in the amount of ice discharging into the sea have been observed along the coastal areas of the ice sheets, and the mass deficit of the Greenland and West Antarctic ice sheets, which are considered vulnerable to warming, has been increasing since the 1990s. The changes are probably related to atmospheric or oceanic temperature variations, which affect the flow speed of ice either via meltwater penetrating to the bottom of the ice sheet or via changes in the flow resistance generated by the floating parts of an ice stream. These phenomena are assumed to increase the mass deficit of the ice sheets in the warming climate; however, there is no comprehensive theory to explain and model them. Thus, it is not yet possible to make reliable predictions of the ice sheet contribution to sea level rise. On the grounds of the historical evidence, it appears that sea level can rise rather rapidly, 1-2 metres per century, even during warm climate periods. Sea level rise projections of similar magnitude have been made with so-called semiempirical methods that are based on modelling the link between sea level and global mean temperature. Such a rapid rise would require considerable acceleration of the ice sheet flow. A stronger rise appears rather unlikely, among other things because the mountainous coastline restricts ice discharge from Greenland. The upper limit of sea level rise from Greenland alone has been estimated at half a metre by the end of this century. Due to changes in the Earth's gravity field, the sea level rise caused by melting ice is not spatially uniform. Near the melting ice sheet the rise is considerably smaller than the global average, whereas farther away it is slightly greater than the average. Because of this phenomenon, the effect of the Greenland ice sheet on Baltic Sea level will probably be rather small during this century, 15 cm at most. Melting of the Antarctic ice sheet is clearly more dangerous for the Baltic Sea, but also very uncertain. It is likely that sea level predictions will become more accurate in the near future as ice sheet models develop.
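
The semiempirical methods mentioned above model the rate of sea level rise as proportional to the global mean temperature anomaly. A minimal sketch of that idea follows; the sensitivity a, the equilibrium temperature T0 and the warming scenario are hypothetical, not calibrated values from the literature.

```python
import numpy as np

a = 3.4e-3    # sea level sensitivity, m per year per deg C (hypothetical)
T0 = -0.5     # temperature at which sea level is stable, deg C (hypothetical)

years = np.arange(2000, 2101)
T = 0.6 + 0.02 * (years - 2000)    # simple linear warming scenario, deg C

S = np.cumsum(a * (T - T0))        # integrate dS/dt = a*(T - T0), dt = 1 a
print(f"projected rise 2000-2100: {S[-1]:.2f} m")
```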

Relevance: 10.00%

Abstract:

Ecology and evolutionary biology is the study of life on this planet. One of the many methods applied to answering the great diversity of questions regarding the lives and characteristics of individual organisms is the utilization of mathematical models. Such models are used in a wide variety of ways. Some help us to reason, functioning as aids to, or substitutes for, our own fallible logic, thus making argumentation and thinking clearer. Models which help our reasoning can lead to conceptual clarification; by expressing ideas in algebraic terms, the relationships between different concepts become clearer. Other mathematical models are used to better understand yet more complicated models, or to develop mathematical tools for their analysis. Though helping us to reason and being used as tools in the craftsmanship of science, many models do not tell us much about the real biological phenomena we are, at least initially, interested in. The main reason for this is that any mathematical model is a simplification of the real world, reducing the complexity and variety of interactions and idiosyncrasies of individual organisms. What such models can tell us, however, is and has been very valuable throughout the history of ecology and evolution. Minimally, a model simplifying the complex world can tell us that, in principle, the patterns produced in a model could also be produced in the real world. We can never know how different a simplified mathematical representation is from the real world, but the similarity models do strive for gives us confidence that their results could apply. This thesis deals with a variety of different models, used for different purposes. One model deals with how one can measure and analyse invasions: the expanding phase of invasive species. Earlier analyses claim to have shown that such invasions can be a regulated phenomenon, in that higher invasion speeds at a given point in time will lead to a reduction in speed. Two simple mathematical models show that such a pattern in this particular measure of invasion speed need not be evidence of regulation. In the context of dispersal evolution, two models acting as proof-of-principle are presented. Parent-offspring conflict emerges when there are different evolutionary optima for adaptive behavior for parents and offspring. We show that the evolution of dispersal distances can entail such a conflict, and that under parental control of dispersal (as, for example, in higher plants) wider dispersal kernels are optimal. We also show that dispersal homeostasis can be optimal: in a setting where dispersal decisions (to leave or stay in a natal patch) are made, strategies that divide their seeds or eggs into fixed fractions that disperse or not, as opposed to randomizing the decision for each seed, can prevail. We also present a model of the evolution of bet-hedging strategies: evolutionary adaptations that occur despite their fitness, on average, being lower than that of a competing strategy. Such strategies can win in the long run because they have a reduced variance in fitness coupled with a reduction in mean fitness, and fitness is of a multiplicative nature across generations, and therefore sensitive to variability. This model is used for conceptual clarification: by developing a population genetic model with uncertain fitness and expressing genotypic variance in fitness as a product of individual-level variance and correlations between individuals of a genotype, we arrive at expressions that intuitively reflect two of the main categorizations of bet-hedging strategies: conservative vs. diversifying, and within- vs. between-generation bet-hedging. In addition, this model shows that these divisions are in fact false dichotomies.
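
The bet-hedging argument can be made concrete with a two-environment example: because fitness multiplies across generations, long-run growth follows the geometric mean fitness, which penalizes variance. The fitness values below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(7)

generations = 10_000
good_year = rng.random(generations) < 0.5    # two equally likely environments

w_A = np.where(good_year, 2.0, 0.4)          # specialist: arithmetic mean 1.20
w_B = np.where(good_year, 1.3, 0.9)          # bet-hedger: arithmetic mean 1.10

geometric_mean = lambda w: np.exp(np.log(w).mean())
print(f"A: arithmetic {w_A.mean():.2f}, geometric {geometric_mean(w_A):.2f}")
print(f"B: arithmetic {w_B.mean():.2f}, geometric {geometric_mean(w_B):.2f}")
# A: geometric mean ~ sqrt(2.0 * 0.4) ~ 0.89 -> declines in the long run
# B: geometric mean ~ sqrt(1.3 * 0.9) ~ 1.08 -> grows despite a lower mean
```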

Relevance: 10.00%

Abstract:

The Earth's climate is a highly dynamic and complex system in which atmospheric aerosols have been increasingly recognized to play a key role. Aerosol particles affect the climate through a multitude of processes, directly by absorbing and reflecting radiation and indirectly by changing the properties of clouds. Because of this complexity, quantification of the effects of aerosols remains highly uncertain. Better understanding of the effects of aerosols requires more information on aerosol chemistry. Before the determination of aerosol chemical composition by the various available analytical techniques, aerosol particles must be reliably sampled and prepared. Indeed, sampling is one of the most challenging steps in aerosol studies, since all available sampling techniques harbor drawbacks. In this study, novel methodologies were developed for sampling and determination of the chemical composition of atmospheric aerosols. In the particle-into-liquid sampler (PILS), aerosol particles grow in saturated water vapor and are then impacted and dissolved in liquid water. Once in water, the aerosol sample can be transported and analyzed by various off-line or on-line techniques. In this study, PILS was modified and the sampling procedure was optimized to obtain less altered aerosol samples with good time resolution. A combination of denuders with different coatings was tested to adsorb gas-phase compounds before PILS. Mixtures of water with alcohols were introduced to increase the solubility of aerosols. The minimum sampling time required was determined by collecting samples off-line every hour and proceeding with liquid-liquid extraction (LLE) and analysis by gas chromatography-mass spectrometry (GC-MS). The laboriousness of LLE followed by GC-MS analysis next prompted an evaluation of solid-phase extraction (SPE) for the extraction of aldehydes and acids in aerosol samples. These two compound groups are thought to be key for aerosol growth. Octadecylsilica, hydrophilic-lipophilic balance (HLB), and mixed-phase anion exchange (MAX) were tested as extraction materials. MAX proved to be efficient for acids, but no tested material offered sufficient adsorption for aldehydes. Thus, PILS samples were extracted only with MAX to guarantee good results for organic acids determined by high-performance liquid chromatography-mass spectrometry (HPLC-MS). On-line coupling of SPE with HPLC-MS is relatively easy, and here on-line coupling of PILS with HPLC-MS through the SPE trap produced some interesting data on relevant acids in atmospheric aerosol samples. A completely different approach to aerosol sampling, namely differential mobility analyzer (DMA)-assisted filter sampling, was employed in this study to provide information about the size-dependent chemical composition of aerosols and understanding of the processes driving aerosol growth from nano-size clusters to climatically relevant particles (>40 nm). The DMA was set to sample particles with diameters of 50, 40, and 30 nm, and aerosols were collected on Teflon or quartz fiber filters. To clarify the gas-phase contribution, zero gas-phase samples were collected by switching off the DMA for every other 15-minute interval. Gas-phase compounds were adsorbed equally well on both types of filter, and were found to contribute significantly to the total compound mass. Gas-phase adsorption is especially significant during the collection of nanometer-size aerosols and always needs to be taken into account. Other aims of this study were to determine the oxidation products of β-caryophyllene (the major sesquiterpene in boreal forests) in aerosol particles. Since reference compounds are needed for verification of the accuracy of analytical measurements, three oxidation products of β-caryophyllene were synthesized: β-caryophyllene aldehyde, β-nocaryophyllene aldehyde, and β-caryophyllinic acid. All three were identified for the first time in ambient aerosol samples, at relatively high concentrations, and their contribution to the aerosol mass (and probably growth) was concluded to be significant. The methodological and instrumental developments presented in this work enable a fuller understanding of the processes behind biogenic aerosol formation and provide new tools for more precise determination of biosphere-atmosphere interactions.
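
The zero gas-phase protocol described above implies a simple blank correction: the average mass collected with the DMA switched off estimates the gas-phase adsorption artifact, which is then subtracted from the total filter mass. A sketch with hypothetical filter masses:

```python
import numpy as np

# Hypothetical masses per filter, ng (not the study's data)
total_filters = np.array([182.0, 175.0, 190.0])  # DMA on: particles + gas
zero_filters = np.array([61.0, 58.0, 66.0])      # DMA off: gas phase only

gas_artifact = zero_filters.mean()
particle_mass = total_filters.mean() - gas_artifact
print(f"gas-phase artifact: {gas_artifact:.0f} ng "
      f"({gas_artifact / total_filters.mean():.0%} of total)")
print(f"particle-phase mass: {particle_mass:.0f} ng")
```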