51 results for LEVEL SET METHODS
Abstract:
Achieving sustainable consumption patterns is a crucial step on the way towards sustainability. The scientific knowledge used to decide which priorities to set and how to enforce them has to converge with societal, political, and economic initiatives on various levels: from individual household decision-making to agreements and commitments in global policy processes. The aim of this thesis is to draw a comprehensive and systematic picture of sustainable consumption, and to do this it develops the concept of Strong Sustainable Consumption Governance. In this concept, consumption is understood as resource consumption. This includes consumption by industries, public consumption, and household consumption. In addition to the availability of resources (including the available sink capacity of the ecosystem) and their use and distribution among the Earth's population, the thesis also considers their contribution to human well-being. This implies giving specific attention to the levels and patterns of consumption. Methods: The thesis introduces the terminology and various concepts of Sustainable Consumption and of Governance. It briefly elaborates on the methodology of Critical Realism and its potential for analysing Sustainable Consumption. It describes the various methods on which the research is based and sets out the political implications a governance approach towards Strong Sustainable Consumption may have. Two models are developed: one for the assessment of the environmental relevance of consumption activities, another to identify the influences of globalisation on the determinants of consumption opportunities. Results: One of the major challenges for Strong Sustainable Consumption is that it is not in line with the current political mainstream: that is, the belief that economic growth can cure all our problems. The proponents therefore have to battle against a strong headwind. Their motivation, however, is the conviction that there is no alternative. Efforts have to be taken on multiple levels by multiple actors. And all of them are needed, as they constitute the individual strings that together make up the rope. However, everyone must ensure that they are pulling in the same direction. It might be useful to apply a carrot-and-stick strategy to stimulate public debate. The stick in this case is to create a sense of urgency. The carrot would be to articulate better to the public the message that a shrinking of the economy is not as much of a disaster as mainstream economics tends to suggest. In parallel to this, it is necessary to demand that governments take responsibility for governance. The dominant strategy is still information provision, but there is ample evidence that hard policies such as regulatory and economic instruments are most effective. As for Civil Society Organizations, it is recommended that they overcome the habit of promoting Sustainable (in fact green) Consumption by using marketing strategies and instead foster public debate on values and well-being. This includes appreciating the potential of social innovation. Countless such initiatives are under way, but their potential is still insufficiently explored. Beyond the question of how to multiply such approaches, it is also necessary to establish political macro structures to foster them.
Abstract:
Human-wildlife conflicts are today an integral part of the rural development discourse. In this research, the main focus is on the spatial explanation, which is not a very common approach in the reviewed literature. My research hypothesis is based on the assumption that human-wildlife conflicts occur when a wild animal crosses a perceived borderline between nature and culture and enters into the realm of the other. The borderline between nature and culture marks a perceived division of spatial content in our senses of place. The animal subject that crosses this border becomes a subject out of place, meaning that the animal is then spatially located in a space where it should not be or where it does not belong according to tradition, custom, rules, law, public opinion, prevailing discourse or some other criteria set by human beings. The appearance of a wild animal in a domesticated space brings an uncontrolled subject into a space where humans have previously commanded total control of all other natural elements. A wild animal out of place may also threaten the biosecurity of the place in question. I carried out a case study in the Liwale district in south-eastern Tanzania to test my hypothesis during June and July 2002. I also collected documents and carried out interviews in Dar es Salaam in 2003. I studied the human-wildlife conflicts in six rural villages, where a total of 183 persons participated in the village meetings. My research methods included semi-structured interviews, participatory mapping, a questionnaire survey and Q-methodology. The rural communities in the Liwale district have a long history of co-existing with wildlife and they still have traditional knowledge of wildlife management and hunting. Wildlife conservation through the establishment of game reserves during the colonial era has escalated human-wildlife conflicts in the Liwale district. This study shows that the villagers perceive some wild animals differently in their images of the African countryside than the district and regional level civil servants do. From the small-scale subsistence farmers' point of view, wild animals continue to challenge the separation of the wild (the forests) and the domestic spaces (the cultivated fields) by moving across the perceived borders in search of food and shelter. As a result, the farmers may lose their crops, livestock or even their own lives in confrontations with wild animals. Human-wildlife conflicts in the Liwale district are manifold and cannot be explained simply on the basis of attitudes or perceived images of landscapes. However, the spatial explanation of these conflicts provides us with more understanding of why human-wildlife conflicts are so widely found across the world.
Abstract:
In recent years, concern has arisen over the effects of increasing carbon dioxide (CO2) in the earth's atmosphere due to the burning of fossil fuels. One way to mitigate the increase in atmospheric CO2 concentration and climate change is carbon sequestration in forest vegetation through photosynthesis. Comparable regional-scale estimates for the carbon balance of forests are therefore needed for scientific and political purposes. The aim of the present dissertation was to improve methods for quantifying and verifying inventory-based carbon pool estimates of boreal forests on mineral soils. Ongoing forest inventories provide data based on statistically sound sampling for estimating the level of carbon stocks and stock changes, but improved modelling tools and comparison of methods are still needed. In this dissertation, the entire inventory-based large-scale forest carbon stock assessment method was presented together with some separate methods for enhancing and comparing it. The enhancement methods presented here include ways to quantify the biomass of understorey vegetation as well as to estimate the litter production of needles and branches. In addition, the optical remote sensing method illustrated in this dissertation can be used for comparison with independent data. The forest inventory-based large-scale carbon stock assessment method demonstrated here provided reliable carbon estimates when compared with independent data. Future activity to improve the accuracy of this method could consist of reducing the uncertainties regarding belowground biomass and litter production as well as the soil compartment. The methods developed will serve the needs of UNFCCC reporting and reporting under the Kyoto Protocol. This method is principally intended for analysts or planners interested in quantifying carbon over extensive forest areas.
Abstract:
In Finland, one of the most important current issues in environmental management is the quality of surface waters. The increasing social importance of lakes and water systems has generated wide-ranging interest in lake restoration and management, concerning especially lakes suffering from eutrophication, but also from other environmental impacts. Most of the factors deteriorating the water quality in Finnish lakes are connected to human activities. Especially since the 1940s, intensified farming practices and the conduction of sewage waters from scattered settlements, cottages and industry have affected the lakes, which have simultaneously developed into recreational areas for a growing number of people. Therefore, this study was focused on small lakes which are human-impacted, located close to settlement areas and have a significant value for the local population. The aim of this thesis was to obtain information from lake sediment records for ongoing lake restoration activities and to prove that a well planned, properly focused lake sediment study is an essential part of the work related to the evaluation, target consideration and restoration of Finnish lakes. Altogether 11 lakes were studied. The study of Lake Kaljasjärvi was related to the gradual eutrophication of the lake. In lakes Ormajärvi, Suolijärvi, Lehee, Pyhäjärvi and Iso-Roine, the main focus was on sediment mapping, as well as on the long-term changes of the sedimentation, which were compared to Lake Pääjärvi. In Lake Hormajärvi, the roles of different kinds of sedimentation environments in the eutrophication development of the lake's two basins were compared. Lake Orijärvi has not been eutrophied, but ore exploitation and the related acid mine drainage from the catchment area have influenced the lake drastically, and the changes caused by the metal load were investigated. The twin lakes Etujärvi and Takajärvi are slightly eutrophied, but also suffer problems associated with the erosion of the substantial peat accumulations covering the fringe areas of the lakes. These peat accumulations are related to Holocene water level changes, which were investigated. The methods used were chosen case-specifically for each lake. In general, acoustic soundings of the lakes, detailed description of the nature of the sediment and determinations of the physical properties of the sediment, such as water content, loss on ignition and magnetic susceptibility, were used, as was grain size analysis. A wide set of chemical analyses was also used. Diatom and chrysophycean cyst analyses were applied, and the diatom-inferred total phosphorus content was reconstructed. The results of these studies prove that the ideal lake sediment study, as part of a lake management project, should be two-phased. In the first phase, thorough mapping of sedimentation patterns should be carried out by soundings and adequate corings. The actual sampling, based on the preliminary results, must include at least one long core from the main sedimentation basin for determining the natural background state of the lake. The recent, artificially impacted development of the lake can then be determined by short-core and surface sediment studies. The sampling must again be focused on the basis of the sediment mapping, and it should represent all the different sedimentation environments and bottom dynamic zones, considering the inlets and outlets, as well as the effects of possible point loaders of the lake.
In practice, the budget of lake management projects is usually limited and only the most essential work and analyses can be carried out. The set of chemical and biological analyses and dating methods must therefore be thoroughly considered and adapted to the specific management problem. The results also show that information obtained from a properly performed sediment study enhances the planning of the restoration, makes it possible to define the target of the remediation activities and improves the cost-efficiency of the project.
Abstract:
This thesis consists of an introduction, four research articles and an appendix. The thesis studies relations between two different approaches to the continuum limit of models of two-dimensional statistical mechanics at criticality. The approach of conformal field theory (CFT) can be thought of as the algebraic classification of some basic objects in these models. It has been successfully used by physicists since the 1980s. The other approach, Schramm-Loewner evolutions (SLEs), is a recently introduced set of mathematical methods to study random curves or interfaces occurring in the continuum limit of the models. The first and second included articles argue, on the basis of statistical mechanics, what would be a plausible relation between SLEs and conformal field theory. The first article studies multiple SLEs, several random curves simultaneously in a domain. The proposed definition is compatible with a natural commutation requirement suggested by Dubédat. The curves of multiple SLE may form different topological configurations, "pure geometries". We conjecture a relation between the topological configurations and the CFT concepts of conformal blocks and operator product expansions. Example applications of multiple SLEs include crossing probabilities for percolation and the Ising model. The second article studies SLE variants that represent models with boundary conditions implemented by primary fields. The most well known of these, SLE(kappa, rho), is shown to be simple in terms of the Coulomb gas formalism of CFT. In the third article the space of local martingales for variants of SLE is shown to carry a representation of the Virasoro algebra. Finding this structure is guided by the relation of SLEs and CFTs in general, but the result is established in a straightforward fashion. This article, too, emphasizes multiple SLEs and proposes a possible way of treating pure geometries in terms of the Coulomb gas. The fourth article states results of applications of the Virasoro structure to the open questions of SLE reversibility and duality. Proofs of the stated results are provided in the appendix. The objective is an indirect computation of certain polynomial expected values. Provided that these expected values exist, in generic cases they are shown to possess the desired properties, thus giving support for both reversibility and duality.
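For readers unfamiliar with SLE, the following is the standard textbook definition of chordal SLE(kappa) and of the SLE(kappa, rho) variant mentioned in the abstract, written in the conventional notation (g_t, W_t, V_t^i); it is given here only as background and is not quoted from the thesis.

```latex
% Chordal Loewner equation in the upper half-plane, with conformal maps g_t:
\frac{\partial g_t(z)}{\partial t} = \frac{2}{g_t(z) - W_t}, \qquad g_0(z) = z .

% SLE(kappa): the driving function is a scaled Brownian motion,
W_t = \sqrt{\kappa}\, B_t .

% SLE(kappa, rho): force points V_t^i add a drift to the driving function,
\mathrm{d}W_t = \sqrt{\kappa}\,\mathrm{d}B_t
  + \sum_i \frac{\rho_i}{W_t - V_t^i}\,\mathrm{d}t,
\qquad
\mathrm{d}V_t^i = \frac{2}{V_t^i - W_t}\,\mathrm{d}t .
```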
Abstract:
Genetics, the science of heredity and variation in living organisms, has a central role in medicine, in breeding crops and livestock, and in studying fundamental topics of biological sciences such as evolution and cell functioning. Currently the field of genetics is under rapid development because of the recent advances in technologies by which molecular data can be obtained from living organisms. In order to extract the most information from such data, the analyses need to be carried out using statistical models that are tailored to take account of the particular genetic processes. In this thesis we formulate and analyze Bayesian models for genetic marker data of contemporary individuals. The major focus is on the modeling of the unobserved recent ancestry of the sampled individuals (say, for tens of generations or so), which is carried out by using explicit probabilistic reconstructions of the pedigree structures accompanied by the gene flows at the marker loci. For such a recent history, the recombination process is the major genetic force that shapes the genomes of the individuals, and it is included in the model by assuming that the recombination fractions between the adjacent markers are known. The posterior distribution of the unobserved history of the individuals is studied conditionally on the observed marker data by using a Markov chain Monte Carlo (MCMC) algorithm. The example analyses consider estimation of the population structure, the relatedness structure (both at the level of whole genomes and at each marker separately), and haplotype configurations. For situations where the pedigree structure is partially known, an algorithm to create an initial state for the MCMC algorithm is given. Furthermore, the thesis includes an extension of the model for the recent genetic history to situations where a quantitative phenotype has also been measured from the contemporary individuals. In that case the goal is to identify positions on the genome that affect the observed phenotypic values. This task is carried out within the Bayesian framework, where the number and the relative effects of the quantitative trait loci are treated as random variables whose posterior distribution is studied conditionally on the observed genetic and phenotypic data. In addition, the thesis contains an extension of a widely used haplotyping method, the PHASE algorithm, to settings where genetic material from several individuals has been pooled together, and the allele frequencies of each pool are determined in a single genotyping.
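The abstract does not spell out the sampler, but the general Metropolis-Hastings pattern it refers to can be sketched as follows. This is a generic illustration, not the thesis's algorithm; `propose`, `log_prior` and `log_likelihood` are hypothetical stand-ins for the problem-specific pieces (here they would be moves on a latent pedigree/gene-flow configuration and the probability of the marker data given that configuration).

```python
import math
import random

def metropolis_hastings(init_state, log_prior, log_likelihood, propose, n_iter=10000):
    """Generic Metropolis-Hastings sampler over a latent configuration.

    log_prior, log_likelihood: functions state -> float (log densities).
    propose: function state -> (new_state, log_hastings_ratio), where the ratio is
             log q(state | new_state) - log q(new_state | state).
    Returns the list of visited states (a dependent sample from the posterior).
    """
    state = init_state
    log_post = log_prior(state) + log_likelihood(state)
    samples = []
    for _ in range(n_iter):
        new_state, log_hastings = propose(state)
        new_log_post = log_prior(new_state) + log_likelihood(new_state)
        # Accept with probability min(1, posterior ratio * Hastings ratio).
        if math.log(random.random()) < new_log_post - log_post + log_hastings:
            state, log_post = new_state, new_log_post
        samples.append(state)
    return samples
```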
Abstract:
Microarrays are high-throughput biological assays that allow the screening of thousands of genes for their expression. The main idea behind microarrays is to compute for each gene a unique signal that is directly proportional to the quantity of mRNA that was hybridized on the chip. A large number of steps and the errors associated with each step make the generated expression signal noisy. As a result, microarray data need to be carefully pre-processed before their analysis can be assumed to lead to reliable and biologically relevant conclusions. This thesis focuses on developing methods for improving the gene signal and further utilizing this improved signal for higher-level analysis. To achieve this, first, approaches for designing microarray experiments using various optimality criteria, considering both biological and technical replicates, are described. A carefully designed experiment leads to a signal with low noise, as the effect of unwanted variation is minimized and the precision of the estimates of the parameters of interest is maximized. Second, a system for improving the gene signal by using three scans at varying scanner sensitivities is developed. A novel Bayesian latent intensity model is then applied to these three sets of expression values, corresponding to the three scans, to estimate the suitably calibrated true signal of genes. Third, a novel image segmentation approach that segregates the fluorescent signal from the undesired noise is developed using an additional dye, SYBR green RNA II. This technique helps in identifying signal arising only from the hybridized DNA, so that signals corresponding to dust, scratches, dye spills, and other noise are avoided. Fourth, an integrated statistical model is developed, where signal correction, systematic array effects, dye effects, and differential expression are modelled jointly as opposed to a sequential application of several methods of analysis. The methods described here have been tested only for cDNA microarrays, but can also, with some modifications, be applied to other high-throughput technologies. Keywords: high-throughput technology, microarray, cDNA, multiple scans, Bayesian hierarchical models, image analysis, experimental design, MCMC, WinBUGS.
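As a rough illustration of why multiple scans help (this is a naive calibration heuristic, not the thesis's Bayesian latent intensity model, which is fitted with MCMC/WinBUGS): a spot that saturates at high scanner sensitivity can still be quantified from a lower-sensitivity scan once the scans are placed on a common scale. The gain values and saturation limit below are invented parameters.

```python
import numpy as np

def combine_scans(scans, gains, saturation=65535 * 0.95):
    """Naive multi-scan calibration: for each spot, take the most sensitive
    scan that is not saturated and rescale it by its relative gain.

    scans: array of shape (n_scans, n_spots), ordered from lowest to highest gain.
    gains: relative scanner gains in the same order (gains[0] == 1.0 is the reference).
    Returns one calibrated intensity per spot on the reference-scan scale.
    """
    scans = np.asarray(scans, dtype=float)
    gains = np.asarray(gains, dtype=float)
    calibrated = np.empty(scans.shape[1])
    for spot in range(scans.shape[1]):
        # Prefer the highest-gain unsaturated reading (best signal-to-noise).
        for scan in range(scans.shape[0] - 1, -1, -1):
            if scans[scan, spot] < saturation:
                calibrated[spot] = scans[scan, spot] / gains[scan]
                break
        else:
            calibrated[spot] = saturation / gains[0]  # all scans saturated
    return calibrated

# Example: three scans of four spots at increasing sensitivity (made-up numbers).
scans = [[100, 5000, 40000, 62000],
         [310, 15200, 64000, 65535],
         [980, 47000, 65535, 65535]]
print(combine_scans(scans, gains=[1.0, 3.1, 9.8]))
```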
Abstract:
Metabolism is the cellular subsystem responsible for the generation of energy from nutrients and the production of building blocks for larger macromolecules. Computational and statistical modeling of metabolism is vital to many disciplines including bioengineering, the study of diseases, drug target identification, and understanding the evolution of metabolism. In this thesis, we propose efficient computational methods for metabolic modeling. The techniques presented are targeted particularly at the analysis of large metabolic models encompassing the whole metabolism of one or several organisms. We concentrate on three major themes of metabolic modeling: metabolic pathway analysis, metabolic reconstruction and the study of the evolution of metabolism. In the first part of this thesis, we study metabolic pathway analysis. We propose a novel modeling framework called gapless modeling to study biochemically viable metabolic networks and pathways. In addition, we investigate the utilization of atom-level information on metabolism to improve the quality of pathway analyses. We describe efficient algorithms for discovering both gapless and atom-level metabolic pathways, and conduct experiments with large-scale metabolic networks. The presented gapless approach offers a compromise in terms of complexity and feasibility between the previous graph-theoretic and stoichiometric approaches to metabolic modeling. Gapless pathway analysis shows that microbial metabolic networks are not as robust to random damage as suggested by previous studies. Furthermore, the amino acid biosynthesis pathways of the fungal species Trichoderma reesei discovered from atom-level data are shown to closely correspond to those of Saccharomyces cerevisiae. In the second part, we propose computational methods for metabolic reconstruction in the gapless modeling framework. We study the task of reconstructing a metabolic network that does not suffer from connectivity problems. Such problems often limit the usability of reconstructed models, and typically require a significant amount of manual postprocessing. We formulate gapless metabolic reconstruction as an optimization problem and propose an efficient divide-and-conquer strategy to solve it with real-world instances. We also describe computational techniques for solving problems stemming from ambiguities in metabolite naming. These techniques have been implemented in a web-based software, ReMatch, intended for the reconstruction of models for 13C metabolic flux analysis. In the third part, we extend our scope from single to multiple metabolic networks and propose an algorithm for inferring gapless metabolic networks of ancestral species from phylogenetic data. Experimenting with 16 fungal species, we show that the method is able to generate results that are easily interpretable and that provide hypotheses about the evolution of metabolism.
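To make the "gapless" idea concrete, here is a minimal sketch of my own, under the simplifying assumption that a reaction can fire once all of its substrates are producible from a set of seed metabolites. It is the classic scope-expansion fixed point used as a connectivity check, not the optimization formulation developed in the thesis.

```python
def reachable_reactions(reactions, seed_metabolites):
    """Iteratively find reactions whose substrates are all producible.

    reactions: dict mapping reaction id -> (substrates, products), both iterables.
    seed_metabolites: metabolites assumed available (e.g. the growth medium).
    Returns (set of fireable reactions, set of producible metabolites).
    A network is 'gapless' w.r.t. the seeds if every reaction ends up fireable.
    """
    producible = set(seed_metabolites)
    fired = set()
    changed = True
    while changed:
        changed = False
        for rid, (substrates, products) in reactions.items():
            if rid not in fired and all(s in producible for s in substrates):
                fired.add(rid)
                producible.update(products)
                changed = True
    return fired, producible

# Tiny example network with one gap: R3 needs metabolite 'D', which nothing produces.
reactions = {
    "R1": (["A"], ["B"]),
    "R2": (["B"], ["C"]),
    "R3": (["C", "D"], ["E"]),
}
print(reachable_reactions(reactions, seed_metabolites=["A"]))
```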
Abstract:
In this thesis we present and evaluate two pattern-matching-based methods for answer extraction in textual question answering systems. A textual question answering system is a system that seeks answers to natural language questions from unstructured text. Textual question answering systems are an important research problem because, as the amount of natural language text in digital format grows all the time, the need for novel methods for pinpointing important knowledge from the vast textual databases becomes more and more urgent. We concentrate on developing methods for the automatic creation of answer extraction patterns. A new type of extraction pattern is also developed. The pattern-matching-based approach chosen is interesting because of its language and application independence. The answer extraction methods are developed in the framework of our own question answering system. Publicly available datasets in English are used as training and evaluation data for the methods. The techniques developed are based on the well-known methods of sequence alignment and hierarchical clustering. The similarity metric used is based on edit distance. The main conclusions of the research are that answer extraction patterns consisting of the most important words of the question and of the following information extracted from the answer context: plain words, part-of-speech tags, punctuation marks and capitalization patterns, can be used in the answer extraction module of a question answering system. This type of pattern and the two new methods for generating answer extraction patterns provide average results when compared to those produced by other systems using the same dataset. However, most answer extraction methods in the question answering systems tested with the same dataset are both hand-crafted and based on a system-specific and fine-grained question classification. The new methods developed in this thesis require no manual creation of answer extraction patterns. As a source of knowledge, they require a dataset of sample questions and answers, as well as a set of text documents that contain answers to most of the questions. The question classification used in the training data is a standard one and is already provided in the publicly available data.
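The abstract names edit distance and hierarchical clustering as the building blocks; the sketch below shows a generic token-level edit distance and a single-linkage grouping of patterns whose normalized distance falls under a threshold. The threshold value and the example token patterns are invented for illustration and are not taken from the thesis.

```python
def edit_distance(a, b):
    """Levenshtein distance between two token sequences."""
    prev = list(range(len(b) + 1))
    for i, tok_a in enumerate(a, 1):
        curr = [i]
        for j, tok_b in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                       # deletion
                            curr[j - 1] + 1,                   # insertion
                            prev[j - 1] + (tok_a != tok_b)))   # substitution
        prev = curr
    return prev[-1]

def cluster_patterns(patterns, threshold=0.5):
    """Single-linkage grouping: merge clusters whose closest members have a
    normalized edit distance below the threshold."""
    clusters = [[p] for p in patterns]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if any(edit_distance(p, q) / max(len(p), len(q)) < threshold
                       for p in clusters[i] for q in clusters[j]):
                    clusters[i] += clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters

# Example: hypothetical token-level answer-context patterns.
patterns = [["<Q>", "was", "born", "in", "<CAP>"],
            ["<Q>", "was", "born", "on", "<NUM>"],
            ["<CAP>", ",", "the", "capital", "of", "<Q>"]]
print(cluster_patterns(patterns))
```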
Abstract:
Ubiquitous computing is about making computers and computerized artefacts a pervasive part of our everyday lives, bringing more and more activities into the realm of information. The computationalization and informationalization of everyday activities increase not only our reach, efficiency and capabilities but also the amount and kinds of data gathered about us and our activities. In this thesis, I explore how information systems can be constructed so that they handle this personal data in a reasonable manner. The thesis provides two kinds of results: on the one hand, tools and methods for both the construction and the evaluation of ubiquitous and mobile systems; on the other hand, an evaluation of the privacy aspects of a ubiquitous social awareness system. The work emphasises real-world experiments as the most important way to study privacy. Additionally, the state of current information systems as regards data protection is studied. The tools and methods in this thesis consist of three distinct contributions. An algorithm for locationing in cellular networks is proposed that does not require the location information to be revealed beyond the user's terminal. A prototyping platform for the creation of context-aware ubiquitous applications, called ContextPhone, is described and released as open source. Finally, a set of methodological findings for the use of smartphones in social scientific field research is reported. A central contribution of this thesis is the set of pragmatic tools that allow other researchers to carry out experiments. The evaluation of the ubiquitous social awareness application ContextContacts covers both the usage of the system in general and an analysis of its privacy implications. The usage of the system is analyzed, based on several long-term field studies, in the light of how users make inferences about others from real-time contextual cues mediated by the system. The analysis of privacy implications draws together the social psychological theory of self-presentation and research in privacy for ubiquitous computing, deriving a set of design guidelines for such systems. The main findings from these studies can be summarized as follows: the fact that ubiquitous computing systems gather more data about users can be used not only to study the use of such systems in an effort to create better systems, but also to study previously unstudied phenomena, such as the dynamic change of social networks. Systems that let people create new ways of presenting themselves to others can be fun for the users, but self-presentation requires several thoughtful design decisions that allow the manipulation of the image mediated by the system. Finally, the growing amount of computational resources available to the users can be used to allow them to use the data themselves, rather than just being passive subjects of data gathering.
Abstract:
PROFESSION, PERSON AND WORLDVIEW AT A TURNING POINT: A Study of University Libraries and Library Staff in the Information Age 1970-2005. The incongruity between commonly held ideas of libraries and librarians and the changes that have occurred in libraries since 2000 provided the impulse for this work. The objective is to find out if the changes of the last few decades have penetrated to a deeper level, that is, if they have caused changes in the values and world views of library staff and management. The study focuses on Finnish university libraries and the people who work in them. The theoretical framework is provided by the concepts of world view (values, the concept of time, man and self, the experience of the supernatural and the holy, community and leadership). The viewpoint, framework and methods of the study place it in the area of Comparative Religion by applying the world view framework. The time frame is the information age, which has deeply affected Finnish society and scholarly communication from 1970 to 2005. The source material of the study comprises 30 life stories; somewhat more than half of the stories come from the University of Helsinki, and the rest from the other eight universities. Written sources include library journals, planning documents and historical accounts of libraries. The experiences and research diaries of the researcher are also used as source material. The world view questions are discussed on different levels: 1) recognition of the differences and similarities in the values of the library sphere and the university sphere, 2) examination of the world view elements, community and leadership based on the life stories, and 3) the three phases of the effects of information technology on the university libraries and those who work in them. In comparing the values of the library sphere and the university sphere, the appreciation of creative work and culture as well as the founding principles of science and research are jointly held values. The main difference between the values in the university and library spheres concerns competition and service. Competition is part of the university as an institution of research work. The core value of the library sphere is service, which creates the essential ethos of library work. The ethical principles of the library sphere also include the values of democracy and equality as well as the value of intellectual freedom. There is also a difference between an essential value in the university sphere, the value of autonomy and academic freedom, on the one hand, and the global value of the library sphere, organizing operations in a practical and efficient way, on the other. Implementing this value can also create tension between the research community and the library. Based on the life stories, similarities can be found in the values of the library staff members. The value of service seems to be of primary importance for all who are committed to library work and who find it interesting and rewarding. The service role of the library staff can be extended from information services provider to include the roles of teacher, listener and even therapist, all needed in a competitive research community. The values of democracy and equality also emerge fairly strongly. The information age development has progressed in three phases in the libraries from the 1960s onward. In the third phase, beginning in the mid-1990s, the increased usage of electronic resources has set fundamental changes in motion.
The changes have affected basic values and the concept of time as well as the hierarchies and valuations within the library community. In addition to and as a replacement for the library possessing a local identity and operational model, a networked, global library is emerging. The changes have brought tension both to the library communities and to the relationship between the university community and the library. Future orientation can be said to be the key concept for change; it affects where the ideals and models for operations are taken from. Future orientation manifests itself as changes in metaphors, changes in the model of a good librarian and as communal valuations. Tension between the libraries and research communities can arise if the research community pictures the library primarily as a traditional library building with a local identity, whereas the 21st century library staff and directors are affected by future orientation and membership in a networked library sphere, working proactively to develop their libraries.
Abstract:
Wind power has grown fast internationally. It can reduce the environmental impact of energy production and increase energy security. Finland has a turbine industry, but wind electricity production has been slow to grow, and nationally set capacity targets have not been met. I explored the social factors that have affected the slow development of wind power in Finland by studying the perceptions of Finnish national-level wind power actors. By that I refer to people who affect the development of the wind power sector, such as officials, politicians, and representatives of wind industries and various organisations. The material consisted of interviews, a questionnaire, and written sources. The perceptions of wind power, its future, and the methods to promote it were divided. They were studied through discourse analysis, content analysis, and scenario construction. Definition struggles affect views of the significance and potential of wind power in Finland, and also affect investments in wind power and wind power policy choices. Views of the future were demonstrated through scenarios. The views included scenarios of fast growth, but in the most pessimistic views, wind power was not thought to be competitive without support measures even in 2025, and the wind power capacity was correspondingly low. In such a scenario, policy tool choices were expected to remain similar to the ones in use at the time of the interviews. So far, the development in Finland has closely followed this pessimistic scenario. Despite the scepticism about wind electricity production, the wind turbine industry was seen as a credible industry. For many wind power actors, as well as for Finnish wind power policy, the turbine industry is a significant motive to promote wind power. Domestic electricity production and the export turbine industry are linked in discourse through so-called home market argumentation. Finnish policy tools have included subsidies, research and development funding, and information policies. The criteria used to evaluate policy measures were both process-oriented and value-based. Feed-in tariffs and green certificates that are common elsewhere have not been taken into use in Finland. Some interviewees considered such tools unsuitable for free electricity markets and for the Finnish policy style, regarding them as dictatorial and contrary to Western values. Other interviewees supported their use because of their effectiveness. The current Finnish policy tools are not sufficiently effective to increase wind power production significantly. The marginalisation of wind power in discourses, pessimistic views of the future, and the view that the small consumer demand for wind electricity represents the political views of citizens towards promoting wind power, make it more difficult to put stronger policy measures into use. Wind power has not yet significantly contributed to the ecological modernisation of the energy sector in Finland, but the situation may change as the need to reduce emissions from energy production continues.
Abstract:
One major reason for the global decline of biodiversity is habitat loss and fragmentation. Conservation areas can be designed to reduce biodiversity loss, but as resources are limited, conservation efforts need to be prioritized in order to achieve the best possible outcomes. The field of systematic conservation planning developed as a response to opportunistic approaches to conservation that often resulted in biased representation of biological diversity. The last two decades have seen the development of increasingly sophisticated methods that account for information about biodiversity conservation goals (benefits), economic considerations (costs) and socio-political constraints. In this thesis I focus on two general topics related to systematic conservation planning. First, I address two aspects of the question of how biodiversity features should be valued. (i) I investigate the extremely important but often neglected issue of differential prioritization of species for conservation. Species prioritization can be based on various criteria, and is always goal-dependent, but can also be implemented in a scientifically more rigorous way than is the usual practice. (ii) I introduce a novel framework for conservation prioritization, which is based on continuous benefit functions that convert increasing levels of biodiversity feature representation into increasing conservation value, using the principle that more is better. Traditional target-based systematic conservation planning is a special case of this approach, in which a step function is used as the benefit function. We have further expanded the benefit function framework for area prioritization to address issues such as protected area size and habitat vulnerability. In the second part of the thesis I address the application of community-level modelling strategies to conservation prioritization. One of the most serious issues in systematic conservation planning currently is not the deficiency of methodology for selection and design, but simply the lack of data. Community-level modelling offers a surrogate strategy that makes conservation planning more feasible in data-poor regions. We have reviewed the available community-level approaches to conservation planning. These range from simplistic classification techniques to sophisticated modelling and selection strategies. We have also developed a general and novel community-level approach to conservation prioritization that significantly improves on methods that were available before. This thesis introduces further degrees of realism into conservation planning methodology. The benefit-function-based conservation prioritization framework largely circumvents the problematic phase of target setting, and allowing for trade-offs between the representation of different species provides a more flexible and hopefully more attractive approach to conservation practitioners. The community-level approach seems highly promising and should prove valuable for conservation planning, especially in data-poor regions. Future work should focus on integrating prioritization methods to deal with the multiple aspects that in combination influence the prioritization process, and on further testing and refining the community-level strategies using real, large datasets.
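To illustrate the contrast the abstract draws between target-based and benefit-function-based prioritization (in my notation, not the thesis's): let r_j be the representation of biodiversity feature j achieved by a candidate reserve set. Target-based planning scores a feature with a step function at its target T_j, while the benefit-function framework uses a continuous, increasing function, for example a concave power function.

```latex
% Total conservation value of a solution as a sum over features j:
V \;=\; \sum_{j} w_j\, f_j(r_j)

% Target-based planning: a step benefit function at target T_j
f_j(r_j) \;=\;
\begin{cases}
1, & r_j \ge T_j \\
0, & r_j < T_j
\end{cases}

% Continuous benefit function ("more is better", with diminishing returns), e.g.
f_j(r_j) \;=\; \left(\frac{r_j}{R_j}\right)^{z}, \qquad 0 < z \le 1
```

Here R_j stands for the total amount of feature j and w_j for a feature-specific weight (the species-prioritization aspect discussed in the abstract); the particular functional form is only one possible choice.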
Abstract:
Biological invasions are considered one of the greatest threats to biodiversity, as they may lead to the disruption and homogenization of natural communities and, in the worst case, to native species extinctions. The introduction of genetically modified organisms (GMOs) to agricultural, fisheries and forestry practices brings them into contact with natural populations. GMOs may appear as new invasive species if they are able to (1) invade natural habitats or (2) hybridize with their wild relatives. The benefits of GMOs, such as increased yield or decreased use of insecticides or herbicides in cultivation, may thus be reduced due to the potential risks they may cause. A careful ecological risk analysis therefore has to precede any responsible GMO introduction. In this thesis I study ecological invasion in relation to GMOs, and what kind of consequences invasion may have in natural populations. A set of theoretical models that combine life-history evolution, population dynamics, and population genetics was developed for the hazard identification part of the ecological risk assessment of GMOs. In addition, the potential benefits of GMOs in the management of an invasive pest were analyzed. In the first study I showed that a population that is fluctuating due to scramble-type density dependence (caused by, e.g., nutrient competition in plants) may be invaded by a population that is relatively more limited by a resource (e.g., light in plants) that is a cause of contest-type density dependence. This result emphasises the higher risk of invasion in unstable environments. The next two studies focused on the escape of a growth hormone (GH) transgenic fish into a natural population. The results showed that previous models may have given too pessimistic a view of the so-called Trojan gene effect, where the invading genotype is harmful for the population as a whole. The previously suggested population extinctions did not occur in my studies, since the changes in mating preferences caused by the GH-fish were ameliorated by a decreased level of competition. The GH-invaders may also have to exceed a threshold density before invasion can be successful. I also showed that the prevalence of the mature parr (a.k.a. sneaker) strategy among GH-fish may have a clear effect on the invasion outcome. The fourth study assessed the risks of and developed methods against the invasion of the Colorado Potato Beetle (CPB, Leptinotarsa decemlineata). I showed that the eradication of CPB is most important for the prevention of their establishment, but the cultivation of transgenic Bt-potato could also be effective. In general, my results show that invasion of transgenic species or genotypes is possible under certain realistic conditions, resulting in competitive exclusion, population decline through outbreeding depression, and genotypic displacement of native species. Ecological risk assessment should regard the decline and displacement of the wild genotype by an introduced one as a consequence that is as serious as population extinction. It will also be crucial to take into account different kinds of behavioural differences among species when assessing the possible hazards that GMOs may cause if they escape. The benefits found for the effectiveness of GMO crops in pest management may also be too optimistic, since CPB may evolve resistance to Bt-toxin.
The models in this thesis could be further applied in case-specific risk assessment of GMOs by supplementing them with detailed data on the species' biology, the effect of the transgene introduced into the species, and the characteristics of the populations or environments at risk of being invaded.
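The scramble/contest distinction in the first study can be made concrete with two textbook discrete-time models, given here purely for illustration (the thesis's own models additionally combine life-history evolution and population genetics): the Ricker map is commonly used for scramble competition and can produce the fluctuating dynamics mentioned above, while the Beverton-Holt map describes contest competition and approaches its equilibrium monotonically.

```latex
% Scramble competition (Ricker): overcompensation, can cycle or fluctuate
N_{t+1} = N_t \exp\!\left[r\left(1 - \frac{N_t}{K}\right)\right]

% Contest competition (Beverton-Holt): compensation, monotone approach to equilibrium
N_{t+1} = \frac{R\,N_t}{1 + N_t/M}
```

Here N_t is the population size, r and K are the Ricker growth rate and carrying capacity, and R and M are the Beverton-Holt per-capita growth factor and half-saturation density.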
Abstract:
Composting refers to the aerobic degradation of organic material and is one of the main waste treatment methods used in Finland for treating separated organic waste. The composting process allows converting organic waste into a humus-like end product which can be used to increase the organic matter in agricultural soils, in gardening, or in landscaping. Microbes play a key role as degraders during the composting process, and the microbiology of composting has been studied for decades, but there are still open questions regarding the microbiota in industrial composting processes. It is known that with the traditional, culturing-based methods only a small fraction, below 1%, of the species in a sample is normally detected. In recent years an immense diversity of bacteria, fungi and archaea has been found to occupy many different environments. Therefore the methods for characterising microbes constantly need to be developed further. In this thesis the presence of fungi and bacteria in full-scale and pilot-scale composting processes was characterised with cloning and sequencing. Several clone libraries were constructed and altogether nearly 6000 clones were sequenced. The microbial communities detected in this study were found to differ from the compost microbes observed in previous research with cultivation-based methods or with molecular methods from processes of smaller scale, although there were similarities as well. The bacterial diversity was high. Based on the non-parametric coverage estimations, the number of bacterial operational taxonomic units (OTUs) in certain stages of composting was over 500. Sequences similar to Lactobacillus and Acetobacteria were frequently detected in the early stages of drum composting. In tunnel stages of composting the bacterial community comprised Bacillus, Thermoactinomyces, Actinobacteria and Lactobacillus. The fungal diversity was found to be high, and phylotypes similar to yeasts were abundantly found in the full-scale drum and tunnel processes. In addition to phylotypes similar to Candida, Pichia and Geotrichum, moulds from the genera Thermomyces and Penicillium were observed in tunnel stages of composting. Zygomycetes were detected in the pilot-scale composting processes and in the compost piles. In some of the samples there were a few abundant phylotypes present in the clone libraries that masked the rare ones. The rare phylotypes were of interest, and a method for collecting them from clone libraries for sequencing was developed. With negative selection of the abundant phylotypes, the rare ones were picked from the clone libraries. Thus 41% of the clones in the studied clone libraries were sequenced. Since microbes play a central role in composting and in many other biotechnological processes, rapid methods for the characterization of microbial diversity would be of value, both scientifically and commercially. Current methods, however, lack sensitivity and specificity and are therefore under development. Microarrays have been used in microbial ecology for a decade to study the presence or absence of certain microbes of interest in a multiplex manner. The sequence database collected in this thesis was used as the basis for probe design and microarray development. The enzyme-assisted detection method, the ligation detection reaction (LDR) based microarray, was adapted for species-level detection of microbes characteristic of each stage of the composting process.
With the use of a specially designed control probe it was established that a species-specific probe can detect target DNA representing as little as 0.04% of total DNA in a sample. The developed microarray can be used to monitor composting processes or the hygienisation of the compost end product. A large compost microbe sequence dataset was collected and analysed in this thesis. The results provide valuable information on microbial community composition during industrial-scale composting processes. The microarray method was developed based on the sequence database collected in this study. The method can be utilised for following the fate of interesting microbes during the composting process in an extremely sensitive and specific manner. The platform for the microarray is universal and the method can easily be adapted for studying microbes from environments other than compost.
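The abstract mentions non-parametric coverage estimation of OTU richness without naming the estimator; two standard choices are Good's coverage and the Chao1 richness estimator, sketched below from per-OTU clone counts. The estimator choice and the example counts are my assumptions, not details taken from the thesis.

```python
from collections import Counter

def good_coverage(otu_counts):
    """Good's coverage: estimated fraction of clones whose OTU has already been seen.
    C = 1 - F1/N, where F1 is the number of singleton OTUs and N the number of clones."""
    freq = Counter(otu_counts)          # abundance value -> number of OTUs with it
    singletons = freq[1]
    n_clones = sum(k * v for k, v in freq.items())
    return 1.0 - singletons / n_clones

def chao1(otu_counts):
    """Chao1 richness: S_obs + F1^2 / (2 * F2), with F1 singletons and F2 doubletons."""
    freq = Counter(otu_counts)
    s_obs = len(otu_counts)
    f1, f2 = freq[1], freq[2]
    if f2 == 0:
        # Bias-corrected form used when there are no doubleton OTUs.
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)

# Example: abundances of OTUs observed in a hypothetical clone library.
otu_counts = [120, 45, 30, 8, 5, 3, 2, 2, 1, 1, 1, 1]
print(good_coverage(otu_counts), chao1(otu_counts))
```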