18 results for subgrid-scale model
in Helda - Digital Repository of the University of Helsinki
Abstract:
Numerical models used for atmospheric research, weather prediction and climate simulation describe the state of the atmosphere over the heterogeneous surface of the Earth. Several fundamental properties of atmospheric models depend on orography, i.e. on the average elevation of land over a model area. The higher the model's resolution, the more the details of orography directly influence the simulated atmospheric processes. This sets new requirements for the accuracy of the model formulations with respect to the spatially varying orography. Orography is always averaged, representing the surface elevation within the horizontal resolution of the model. In order to remove the smallest scales and steepest slopes, the continuous spectrum of orography is normally filtered (truncated) even further, typically beyond a few gridlengths of the model. This means that in numerical weather prediction (NWP) models there will always be subgrid-scale orography effects, which cannot be explicitly resolved by numerical integration of the basic equations but require parametrization. At the subgrid scale, different physical processes contribute at different scales. The parametrized processes interact with the resolved-scale processes and with each other. This study contributes to the building of a consistent, scale-dependent system of orography-related parametrizations for the High Resolution Limited Area Model (HIRLAM). The system comprises schemes for handling the effects of mesoscale orography (MSO) and small-scale orography (SSO) on the simulated flow, and a scheme for the effects of orography on the surface-level radiation fluxes. Representation of orography, scale-dependencies of the simulated processes and interactions between the parametrized and resolved processes are discussed. From high-resolution digital elevation data, orographic parameters are derived for both the momentum and the radiation flux parametrizations. Tools for diagnostics and validation are developed and presented.
The parametrization schemes applied, developed and validated in this study are currently being implemented into the reference version of HIRLAM.
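The abstract mentions deriving orographic parameters for the parametrizations from high-resolution elevation data. As a minimal illustration (not HIRLAM code; the function name and toy grid are invented for the example), the sketch below aggregates a fine digital elevation model to a coarse model grid and computes, per grid cell, the resolved mean orography and the subgrid-scale standard deviation of elevation, one of the simplest parameters used by subgrid orography schemes:

```python
import numpy as np

def subgrid_orography_params(dem, block):
    """Aggregate a fine DEM to a coarse grid, returning per-cell mean
    elevation (resolved orography) and the standard deviation of the
    subgrid elevations. `dem` shape must be a multiple of `block`."""
    ny, nx = dem.shape
    assert ny % block == 0 and nx % block == 0
    tiles = dem.reshape(ny // block, block, nx // block, block)
    mean_orog = tiles.mean(axis=(1, 3))   # resolved orography
    sso_std = tiles.std(axis=(1, 3))      # subgrid-scale variability
    return mean_orog, sso_std

# toy 4x4 "high-resolution" DEM aggregated to a 2x2 model grid
dem = np.array([[0., 0., 10., 30.],
                [0., 0., 20., 40.],
                [5., 5., 0.,  0.],
                [5., 5., 0.,  0.]])
mean_orog, sso_std = subgrid_orography_params(dem, 2)
```

A flat coarse cell gets zero subgrid variability, while a cell hiding steep fine-scale terrain gets a large standard deviation, which is exactly the information a subgrid orography scheme feeds back to the resolved flow.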
Abstract:
Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose and resolution of the models. Present-day NWP systems operate with horizontal resolutions in the range from about 40 km to 10 km. Recently, the aim has been to reach operational scales of 1-4 km. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For clear-sky longwave radiation, the parameterization schemes used in NWP models provide much better results than simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are competitive in producing fairly accurate surface fluxes.
Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent for both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is caused by an inertial oscillation mechanism when the large-scale flow is from the south-east or west. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment with a 7.7 km grid size is able to generate an LLJ flow structure similar to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is of great importance, especially if the inner meso-scale model domain is small.
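The "simple empirical schemes" for clear-sky fluxes contrasted above with full NWP radiation codes are typically regressions on screen-level variables. As a hedged illustration (the abstract does not specify which schemes were compared), the sketch below implements a Brunt-type formula for the downward clear-sky longwave flux, with textbook coefficient values that an operational scheme would refit to local radiosonde climatology:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def brunt_longwave_down(T_k, e_hpa, a=0.52, b=0.065):
    """Brunt-type empirical clear-sky downward longwave flux (W m^-2)
    from screen-level air temperature T_k [K] and water vapour pressure
    e_hpa [hPa]: L = (a + b*sqrt(e)) * sigma * T^4.  The coefficients
    a, b are illustrative textbook values, not from the thesis."""
    emissivity = a + b * e_hpa ** 0.5
    return emissivity * SIGMA * T_k ** 4

# a mild, moderately humid case: ~264 W m^-2
flux = brunt_longwave_down(283.0, 10.0)
```

The appeal of such schemes is that two observables give a fairly accurate surface flux; their weakness, reflected in the intercomparison above, is that the fitted coefficients carry no information about the actual vertical profiles of temperature and humidity.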
Abstract:
Cosmological inflation is the dominant paradigm for explaining the origin of structure in the universe. According to the inflationary scenario, there was a period of nearly exponential expansion in the very early universe, long before nucleosynthesis. Inflation is commonly considered a consequence of some scalar field or fields whose energy density starts to dominate the universe. The inflationary expansion converts the quantum fluctuations of the fields into classical perturbations on superhorizon scales, and these primordial perturbations are the seeds of the structure in the universe. Moreover, inflation also naturally explains the high degree of homogeneity and spatial flatness of the early universe. The real challenge of inflationary cosmology lies in trying to establish a connection between the fields driving inflation and theories of particle physics. In this thesis we concentrate on inflationary models at scales well below the Planck scale. The low scale allows us to seek candidates for the inflationary matter within extensions of the Standard Model, but typically also implies fine-tuning problems. We discuss a low-scale model where inflation is driven by a flat direction of the Minimal Supersymmetric Standard Model. The relation between the potential along the flat direction and the underlying supergravity model is studied. The low inflationary scale requires an extremely flat potential, but we find that in this particular model the associated fine-tuning problems can be solved in a rather natural fashion in a class of supergravity models. For this class of models, the flatness is a consequence of the structure of the supergravity model and is insensitive to the vacuum expectation values of the fields that break supersymmetry.
Another low-scale model considered in the thesis is the curvaton scenario, where the primordial perturbations originate from quantum fluctuations of a curvaton field, which is distinct from the fields driving inflation. The curvaton gives a negligible contribution to the total energy density during inflation, but its perturbations become significant in the post-inflationary epoch. The separation between the fields driving inflation and the fields giving rise to primordial perturbations opens up new possibilities for lowering the inflationary scale without introducing fine-tuning problems. The curvaton model typically gives rise to a relatively high level of non-Gaussian features in the statistics of primordial perturbations. We find that the level of non-Gaussian effects is heavily dependent on the form of the curvaton potential. Future observations that provide more accurate information about non-Gaussian statistics can therefore place constraining bounds on the curvaton interactions.
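The link asserted here between the curvaton's energy share and the non-Gaussianity can be made concrete with the standard benchmark result for the simplest quadratic curvaton potential (a textbook relation, not a result specific to this thesis):

\[
\zeta \simeq \frac{2r}{3}\,\frac{\delta\sigma_*}{\sigma_*},
\qquad
r \equiv \left.\frac{3\rho_\sigma}{3\rho_\sigma + 4\rho_\gamma}\right|_{\text{decay}},
\qquad
f_{\mathrm{NL}} \simeq \frac{5}{4r} - \frac{5}{3} - \frac{5r}{6},
\]

where \( \zeta \) is the curvature perturbation, \( \delta\sigma_*/\sigma_* \) the relative curvaton fluctuation at horizon exit, and \( r \) the curvaton's effective energy fraction at decay. A curvaton that decays while still subdominant (\( r \ll 1 \)) thus produces large non-Gaussianity, \( f_{\mathrm{NL}} \approx 5/(4r) \); self-interactions beyond the quadratic potential modify this relation, which is precisely the sensitivity to the potential's form that the thesis exploits.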
Abstract:
Parkinson's disease (PD) is the second most common neurodegenerative disease among the elderly. Its etiology is unknown and no disease-modifying drugs are available; thus, more information concerning its pathogenesis is needed. Among other genes, mutated PTEN-induced kinase 1 (PINK1) has been linked to early-onset and sporadic PD, but its mode of action is poorly understood. Most animal models of PD are based on the use of the neurotoxin 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP). MPTP is metabolized to MPP+ by monoamine oxidase B (MAO B) and causes death of dopaminergic neurons in the substantia nigra in mammals. The zebrafish has been a widely used model organism in developmental biology, but it is now emerging as a model for human diseases due to its ideal combination of properties. Zebrafish are inexpensive and easy to maintain, develop rapidly, breed in large numbers producing transparent embryos, and are readily manipulated by various methods, particularly genetic ones. In addition, zebrafish are vertebrates, so results derived from zebrafish may be more applicable to mammals than results from invertebrate genetic models such as Drosophila melanogaster and Caenorhabditis elegans; however, the similarity cannot be taken for granted. The aim of this study was to establish and test a PD model using larval zebrafish. The developing monoaminergic neuronal systems of larval zebrafish were investigated. We identified and classified 17 catecholaminergic and 9 serotonergic neuron populations in the zebrafish brain, and a 3-dimensional atlas was created to facilitate future research. Only one gene encoding MAO was found in the zebrafish genome. Zebrafish MAO showed MAO A-type substrate specificity, but non-A-non-B inhibitor specificity. The distribution of MAO in larval and adult zebrafish brains was both diffuse and distinctly cellular.
Inhibition of MAO during larval development led to markedly elevated 5-hydroxytryptamine (serotonin, 5-HT) levels, which decreased the locomotion of the fish. MPTP exposure caused a transient loss of cells in specific aminergic cell populations and decreased locomotion. The MPTP-induced changes could be rescued by the MAO B inhibitor deprenyl, suggesting a role for MAO in MPTP toxicity. MPP+ affected only one catecholaminergic cell population; thus, the action of MPP+ was more selective than that of MPTP. The zebrafish PINK1 gene was cloned, and morpholino oligonucleotides were used to suppress its expression in larval zebrafish. The functional domains and expression pattern of zebrafish PINK1 resembled those of other vertebrates, suggesting that zebrafish is a feasible model for studying PINK1. Translation inhibition resulted in loss of cells in the same catecholaminergic cell populations as were affected by MPTP and MPP+. Inactivation of PINK1 sensitized larval zebrafish to otherwise subefficacious doses of MPTP, causing a decrease in locomotion and cell loss in one dopaminergic cell population. Zebrafish appears to be a feasible model for studying PD, since its aminergic systems, the mode of action of MPTP, and the functions of PINK1 resemble those of mammals. However, the functions of zebrafish MAO differ from those of the two forms of MAO found in mammals. Future studies using zebrafish PD models should utilize the advantages specific to zebrafish, such as the ability to execute large-scale genetic or drug screens.
Abstract:
During recent decades there has been a global shift in forest management from a focus solely on timber management to ecosystem management that embraces all aspects of forest functions: ecological, economic and social. This has resulted in a shift in paradigm from sustained yield to the sustained diversity of values, goods and benefits obtained at the same time, introducing new temporal and spatial scales into forest resource management. The purpose of the present dissertation was to develop methods that would enable spatial and temporal scales to be introduced into the storage, processing, access and utilization of forest resource data. The methods developed are based on a conceptual view of a forest as a hierarchically nested collection of objects that can have a dynamically changing set of attributes. The temporal aspect of the methods consists of lifetime management for the objects and their attributes, and of a temporal succession linking the objects together. Development of the forest resource data processing method concentrated on the extensibility and configurability of the data content and model calculations, allowing a diverse set of processing operations to be executed within the same framework. The contribution of this dissertation to the utilisation of multi-scale forest resource data lies in the development of a reference data generation method to support forest inventory methods in approaching single-tree resolution.
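The conceptual model described above (hierarchically nested objects whose dynamically changing attributes have managed lifetimes) can be sketched as a data structure. The class and method names below are invented for illustration and are not taken from the dissertation:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Attribute:
    """An attribute value with an explicit lifetime (valid-time interval)."""
    value: object
    valid_from: int                   # e.g. the year the value became valid
    valid_to: Optional[int] = None    # None = still valid

@dataclass
class ForestObject:
    """A node in a hierarchically nested forest model: a region contains
    stands, a stand contains trees, etc.  The attribute set is dynamic."""
    name: str
    children: list = field(default_factory=list)
    attributes: dict = field(default_factory=dict)  # name -> [Attribute, ...]

    def set_attribute(self, key, value, year):
        """Close the lifetime of the current value and start a new one."""
        history = self.attributes.setdefault(key, [])
        if history and history[-1].valid_to is None:
            history[-1].valid_to = year
        history.append(Attribute(value, year))

    def attribute_at(self, key, year):
        """Value of `key` as it was valid in `year`, or None."""
        for a in self.attributes.get(key, []):
            if a.valid_from <= year and (a.valid_to is None or year < a.valid_to):
                return a.value
        return None

stand = ForestObject("stand-17")
stand.set_attribute("dominant_species", "spruce", 1990)
stand.set_attribute("dominant_species", "birch", 2005)
```

Keeping the full value history rather than overwriting attributes is what lets the same storage serve queries at different points of the temporal succession.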
Abstract:
Deforestation on the Tibetan Plateau in Sichuan has halted, but erosion problems continue. In a doctoral dissertation in tropical silviculture, researcher Ping Zhou of the Viikki Tropical Resources Institute (VITRI) mapped soil erosion susceptibility and its dependence on forest vegetation in a watershed of about 7,400 square kilometres surrounding the Min River, an important tributary of the Yangtze, in Aba Prefecture, Sichuan. The data included satellite mapping and measurements from over 600 field plots. The title of the study is "Soil erosion modelling and ecological restoration of a mountainous watershed in Sichuan, China". Earlier studies had shown that deforestation in this area had already stopped in the early 1980s. Since then the forest area has slowly increased, mainly because industrial logging in natural forests was banned entirely in 1998, and farming on slopes steeper than 25 degrees has also been phased out with the help of economic incentives offered to farmers. As a result, farmland and pasture have also been restored to forest. Ping Zhou was able to divide the mountainous area, which rises to 5,700 metres, into zones of differing erosion susceptibility on the basis of slope gradient, rainfall, vegetation cover and soil type. On about 15 per cent of the studied watershed area, mainly on the steep slopes surrounding the main channel of the Min River, the erosion risk was high or very high. Different types of vegetation had very different effects on erosion susceptibility, and the position of an area at different elevations in the mountains also affected erosion. The surviving near-natural coniferous forests, found mainly in the uppermost parts of the mountains at 2,600-4,000 metres, effectively promote the natural regeneration and spread of forest into damaged areas.
The tree species composition of the remaining forests made it possible to predict the future development of forests across the studied watershed in its different elevation zones and soil types. The most problematic areas for restoration were those where the forest cover had been completely lost, mainly through industrial logging, and where the soil was generally badly degraded by erosion. In these areas hardly any regeneration or restoration measures have been carried out, and restoring forests there will also require planting trees or shrubs. Particularly suitable for this purpose are nitrogen-fixing species, which include a native sea buckthorn species that also occurs in Finland. The study examined the ecological characteristics of more than eighty local native tree species (of which as many as about a third are conifers) and their suitability for forest restoration. The key to success now lies with the local inhabitants, whose changes in land use have already clearly promoted the recovery of natural forest. The Academy of Finland funded a VITRI research project in 2004-2006, of which Ping Zhou's dissertation formed a central part. The fieldwork in Sichuan opened the way for fruitful multidisciplinary cooperation and researcher exchange with the Chengdu Institute of Biology (CIB) of the Chinese Academy of Sciences; this scientific collaboration continues.
Abstract:
Buffer zones are vegetated strips at the edges of agricultural fields along watercourses. As linear habitats in agricultural ecosystems, buffer strips are prominent features and play an important ecological role in many areas. This thesis focuses on the plant species diversity of buffer zones in a Finnish agricultural landscape. The main objective of the present study is to identify the determinants of floral species diversity in arable buffer zones from the local to the regional level. The study was conducted in a watershed area of a farmland landscape in southern Finland. The study area, Lepsämänjoki, is situated in the Nurmijärvi commune, 30 km north of Helsinki, Finland. The biotope mosaics were mapped in GIS. A total of 59 buffer zones were surveyed, of which 29 were also sampled by plot. Firstly, two diversity components (species richness and evenness) were investigated to determine whether the relationship between the two is equal and predictable. I found no correlation between species richness and evenness; the relationship between richness and evenness is unpredictable in a small-scale human-shaped ecosystem. Ordination and correlation analyses show that richness and evenness may result from different ecological processes, and thus should be considered separately. Species richness correlated negatively with soil phosphorus content, and species evenness correlated negatively with the ratio of organic carbon to total nitrogen in the soil. The lack of a consistent pattern in the relationship between these two components may be due to site-specific variation in resource utilization by plant species. Within-habitat configuration (width, length, and area) was investigated to determine which variable best predicts species richness. More species per unit area increment could be obtained by widening a buffer strip than by lengthening it; the width of the strips is an effective determinant of plant species richness.
The increase in species diversity with increasing buffer strip width may be due to cross-sectional habitat gradients within the linear patches. This result can serve as a reference for policy makers and has practical value in agricultural management. In the framework of metacommunity theory, I found that both the mass effect (connectivity) and species sorting (resource heterogeneity) were likely to explain species composition and diversity on local and regional scales. The local and regional processes were interactively dominated by the degree to which dispersal perturbs local communities. In regions of low and intermediate connectivity, species sorting was of primary importance in explaining species diversity, while the mass effect surpassed species sorting in the highly connected region. Increasing connectivity in communities containing high habitat heterogeneity can lead to the homogenization of local communities and, consequently, to lower regional diversity, while local species richness was unrelated to habitat connectivity. Of all species found, Anthriscus sylvestris, Phalaris arundinacea, and Phleum pratense responded significantly to connectivity and showed high abundance in the highly connected region. We suggest that these species may play a role in switching the force shaping community structure from local resources to regional connectivity. At the level of landscape context, the different responses of local species richness and evenness to landscape context were investigated. Seven landscape structural parameters served to indicate landscape context on five scales. On all but the smallest scale, the Shannon-Wiener diversity of land covers (H') correlated positively with local richness; H' showed the highest correlation coefficients with species richness on the second-largest scale.
The edge density of arable fields was the only predictor that correlated with species evenness on all scales, and it showed the highest predictive power on the second-smallest scale. The different predictive power of the factors on different scales revealed a scale-dependent relationship between landscape context and local plant species diversity, and indicated that different ecological processes determine species richness and evenness. Local species richness depends on a regional process on large scales, which may relate to the regional species pool, while species evenness depends on a fine- or coarse-grained farming system, which may relate to the patch quality of the field-edge habitats near the buffer strips. My results suggest some guidelines for conserving species diversity in the agricultural ecosystem. To maintain a high level of species diversity in the strips, a high level of phosphorus in strip soil should be avoided. Widening the strips is the most effective means of improving species richness. Habitat connectivity is not always favorable to species diversity, because increasing connectivity in communities containing high habitat heterogeneity can lead to the homogenization of local communities (beta diversity) and, consequently, to lower regional diversity. Overall, a synthesis of local and regional factors emerged as the model that best explains variation in plant species diversity. The studies also suggest that the effects of these determinants on species diversity have a complex, scale-dependent relationship.
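The two diversity components treated separately throughout this abstract can be computed with the standard textbook definitions (species richness S, Shannon-Wiener diversity H', and Pielou's evenness J' = H'/ln S); this is an illustrative computation, not code from the thesis:

```python
import math

def diversity_indices(counts):
    """Species richness S, Shannon-Wiener diversity H', and Pielou
    evenness J' = H'/ln(S) for a list of per-species abundances."""
    counts = [c for c in counts if c > 0]
    n = sum(counts)
    S = len(counts)
    H = -sum((c / n) * math.log(c / n) for c in counts)
    J = H / math.log(S) if S > 1 else 0.0
    return S, H, J

# a perfectly even community vs. one dominated by a single species
S1, H1, J1 = diversity_indices([10, 10, 10, 10])
S2, H2, J2 = diversity_indices([37, 1, 1, 1])
```

Both toy communities have the same richness (S = 4), yet very different evenness, which is exactly why the thesis argues the two components must be analysed as separate responses.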
Abstract:
There exist various proposals for building a functional and fault-tolerant large-scale quantum computer. Topological quantum computation is one of the more exotic proposals, making use of the properties of quasiparticles manifest only in certain two-dimensional systems. These so-called anyons exhibit topological degrees of freedom which, in principle, can be used to execute quantum computation with intrinsic fault-tolerance. This feature is the main incentive to study topological quantum computation. The objective of this thesis is to provide an accessible introduction to the theory. The thesis considers the theory of anyons arising in two-dimensional quantum mechanical systems described by gauge theories based on so-called quantum double symmetries. The quasiparticles are shown to exhibit interactions and carry quantum numbers which are both of a topological nature. In particular, it is found that the addition of the quantum numbers is not unique: the fusion of the quasiparticles is described by a non-trivial fusion algebra. It is discussed how this property can be used to encode quantum information in a manner which is intrinsically protected from decoherence, and how one could, in principle, perform quantum computation by braiding the quasiparticles. As an example of the general discussion, the particle spectrum and the fusion algebra of an anyon model based on the gauge group S_3 are explicitly derived. The fusion algebra is found to branch into multiple proper subalgebras, and the simplest of them is chosen as a model for an illustrative demonstration. The different steps of a topological quantum computation are outlined and the computational power of the model is assessed. It turns out that the chosen model is not universal for quantum computation. However, because the objective was a demonstration of the theory with explicit calculations, none of the other, more complicated fusion subalgebras were considered.
Studying their applicability for quantum computation could be a topic of further research.
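The non-uniqueness of adding topological quantum numbers can be made concrete in the purely "electric" sector of the S_3 quantum double, where charges are labelled by irreducible representations of S_3 and fusion reduces to decomposing tensor products of irreps. The character-theoretic sketch below is an illustrative fragment, not the thesis's full derivation (the complete spectrum also contains magnetic and dyonic charges); it shows that fusing the two-dimensional charge E with itself yields three distinct outcomes:

```python
# Character table of S_3: rows = irreps (trivial, sign, 2-dim standard E),
# columns = conjugacy classes (identity; 3 transpositions; 2 three-cycles).
CLASS_SIZES = [1, 3, 2]
ORDER = 6
CHARS = {
    "triv": [1, 1, 1],
    "sign": [1, -1, 1],
    "E":    [2, 0, -1],
}

def fuse(a, b):
    """Decompose the tensor product a (x) b into irreps via character
    inner products: the multiplicity of c in a (x) b is
    (1/|G|) * sum over classes of |class| * chi_a * chi_b * chi_c.
    (All S_3 characters are real, so no conjugation is needed.)"""
    prod = [x * y for x, y in zip(CHARS[a], CHARS[b])]
    out = {}
    for name, chi in CHARS.items():
        m = sum(k * p * c for k, p, c in zip(CLASS_SIZES, prod, chi)) // ORDER
        if m:
            out[name] = m
    return out

fusion_EE = fuse("E", "E")   # E (x) E = triv + sign + E
```

The result E x E = triv + sign + E is exactly the kind of non-trivial fusion rule the abstract describes: the outcome of bringing two charges together is not a single charge but a superposition sector, and this multiplicity of fusion channels is what furnishes the protected computational subspace.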
Abstract:
Digital elevation models (DEMs) have been an important topic in geography and the surveying sciences for decades, due to their geomorphological importance as the reference surface for gravitation-driven material flow and to their wide range of uses and applications. When a DEM is used in terrain analysis, for example in automatic drainage basin delineation, errors of the model accumulate in the analysis results. Investigation of this phenomenon is known as error propagation analysis, which has a direct influence on decision-making processes based on interpretations and applications of terrain analysis. Additionally, it may have an indirect influence on data acquisition and DEM generation. The focus of the thesis was on fine-toposcale DEMs, which are typically represented on a 5-50 m grid and used at application scales of 1:10 000-1:50 000. The thesis presents a three-step framework for investigating error propagation in DEM-based terrain analysis. The framework includes methods for visualising the morphological gross errors of DEMs, exploring the statistical and spatial characteristics of the DEM error, performing analytical and simulation-based error propagation analyses, and interpreting the error propagation analysis results. The DEM error model was built using geostatistical methods. The results show that appropriate and exhaustive reporting of the various aspects of fine-toposcale DEM error is a complex task. This is due to the high number of outliers in the error distribution and to morphological gross errors, which are detectable with the presented visualisation methods. In addition, the use of a global characterisation of DEM error is a gross generalisation of reality, due to the small extent of the areas in which the assumption of stationarity is not violated. This was shown using an exhaustive high-quality reference DEM based on airborne laser scanning, together with local semivariogram analysis.
The error propagation analysis revealed that, as expected, an increase in the DEM vertical error will increase the error in surface derivatives. However, contrary to expectations, the spatial autocorrelation of the model appears to have varying effects on the error propagation analysis depending on the application. The use of a spatially uncorrelated DEM error model has been considered a 'worst-case scenario', but this view is now challenged, because none of the DEM derivatives investigated in the study had maximum variation with spatially uncorrelated random error. Significant performance improvement was achieved in simulation-based error propagation analysis by applying process convolution in generating realisations of the DEM error model. In addition, a typology of uncertainty in drainage basin delineations is presented.
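The simulation-based error propagation described above can be sketched for one derivative, slope. In the toy below, the thesis's geostatistical error model and process convolution are replaced by a crude block-expanded noise field so that spatially uncorrelated and strongly autocorrelated DEM error can be contrasted; the function names and synthetic terrain are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

def slope(dem, cell=10.0):
    """Finite-difference slope magnitude of a gridded DEM."""
    gy, gx = np.gradient(dem, cell)
    return np.hypot(gx, gy)

def noise(shape, std, block):
    """Gaussian DEM error field: block=1 is spatially uncorrelated;
    block>1 crudely adds autocorrelation by expanding coarse white
    noise in constant blocks (a stand-in for geostatistical simulation)."""
    cy, cx = -(-shape[0] // block), -(-shape[1] // block)
    coarse = rng.normal(0.0, std, (cy, cx))
    return np.kron(coarse, np.ones((block, block)))[:shape[0], :shape[1]]

def propagated_slope_error(dem, std, block, n_real=200):
    """Monte Carlo error propagation: perturb the DEM, recompute slope,
    and return the mean per-cell std of slope over the realizations."""
    sims = np.stack([slope(dem + noise(dem.shape, std, block))
                     for _ in range(n_real)])
    return sims.std(axis=0).mean()

y, x = np.mgrid[0:32, 0:32]
dem = 0.5 * x + 0.2 * y          # smooth synthetic hillside, 10 m cells
uncorr = propagated_slope_error(dem, std=1.0, block=1)
corr = propagated_slope_error(dem, std=1.0, block=8)
```

With this blocky noise the uncorrelated field perturbs slope far more, since differencing cancels error that is constant over neighbouring cells; the thesis's finding is subtler: with realistic autocorrelation structures, none of the investigated derivatives had their maximum variation at zero correlation.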
Abstract:
Advances in analysis techniques have led to a rapid accumulation of biological data in databases. Such data are often in the form of sequences of observations, examples including DNA sequences and the amino acid sequences of proteins. The scale and quality of the data promise answers to various biologically relevant questions in more detail than has been possible before. For example, one may wish to identify areas in an amino acid sequence which are important for the function of the corresponding protein, or investigate how characteristics at the level of the DNA sequence affect the adaptation of a bacterial species to its environment. Many of the interesting questions are intimately associated with understanding the evolutionary relationships among the items under consideration. The aim of this work is to develop novel statistical models and computational techniques to meet the challenge of deriving meaning from the increasing amounts of data. Our main concern is modeling the evolutionary relationships based on the observed molecular data. We operate within a Bayesian statistical framework, which allows a probabilistic quantification of the uncertainties related to a particular solution. As the basis of our modeling approach we utilize a partition model, which describes the structure of the data by appropriately dividing the data items into clusters of related items. Generalizations and modifications of the partition model are developed and applied to various problems. Large-scale data sets also pose a computational challenge. The models used to describe the data must be realistic enough to capture the essential features of the current modeling task but, at the same time, simple enough to make it possible to carry out the inference in practice. The partition model fulfills these two requirements. The problem-specific features can be taken into account by modifying the prior probability distributions of the model parameters.
The computational efficiency stems from the ability to integrate out the parameters of the partition model analytically, which enables the use of efficient stochastic search algorithms.
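The analytic integration mentioned here can be demonstrated with a toy partition model for categorical sequences: with a symmetric Dirichlet prior on each cluster's per-column category probabilities, the parameters integrate out in closed form (the Dirichlet-multinomial), so any candidate partition can be scored directly during a stochastic search. This is a simplified sketch, not the thesis's model:

```python
from math import lgamma

def log_dm(counts, alpha=1.0):
    """Log marginal likelihood of categorical counts with the symmetric
    Dirichlet(alpha) prior on category probabilities integrated out
    analytically:
    log[ G(K*a)/G(n+K*a) * prod_k G(c_k+a)/G(a) ], G = Gamma function."""
    K, n = len(counts), sum(counts)
    out = lgamma(K * alpha) - lgamma(n + K * alpha)
    for c in counts:
        out += lgamma(c + alpha) - lgamma(alpha)
    return out

def log_partition_score(items, partition, n_symbols=4):
    """Score a partition of sequences: clusters are independent, and
    within a cluster each column is one Dirichlet-multinomial."""
    total = 0.0
    for cluster in partition:
        seqs = [items[name] for name in cluster]
        for col in zip(*seqs):
            counts = [sum(1 for s in col if s == k) for k in range(n_symbols)]
            total += log_dm(counts)
    return total

# toy DNA-like data over symbols 0..3: two clearly related pairs
items = {
    "a1": [0, 0, 1, 2], "a2": [0, 0, 1, 3],
    "b1": [3, 2, 2, 0], "b2": [3, 2, 2, 1],
}
good = log_partition_score(items, [["a1", "a2"], ["b1", "b2"]])
bad = log_partition_score(items, [["a1", "b1"], ["a2", "b2"]])
```

Because the score factorises over clusters, a stochastic search needs to re-score only the clusters a proposed move touches, which is the efficiency gain the abstract alludes to.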
Abstract:
Large-scale chromosome rearrangements such as copy number variants (CNVs) and inversions encompass a considerable proportion of the genetic variation between human individuals, and in a number of cases they have been closely linked with various inheritable diseases. Single-nucleotide polymorphisms (SNPs) constitute another large part of the genetic variance between individuals; they are typically abundant, and measuring them is straightforward and cheap. This thesis presents computational means of using SNPs to detect the presence of inversions and deletions, a particular variety of CNV. Technically, the inversion-detection algorithm detects the suppressed recombination rate between inverted and non-inverted haplotype populations, whereas the deletion-detection algorithm uses the EM algorithm to estimate the haplotype frequencies of a window with and without a deletion haplotype. As a contribution to population biology, a coalescent simulator for simulating inversion polymorphisms has been developed. Coalescent simulation is a backward-in-time method of modelling population ancestry. The simulator also models multiple crossovers, using the Counting model as the chiasma interference model. Finally, this thesis includes an experimental section. The aforementioned methods were tested on synthetic data to evaluate their power and specificity. They were also applied to the HapMap Phase II and Phase III data sets, yielding a number of candidate novel inversions and deletions, and correctly detecting known rearrangements of these types.
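The EM step of the deletion-detection idea can be illustrated with a one-marker toy (the thesis works with multi-SNP haplotype windows): genotype calls at a SNP can hide a deletion allele d, since hemizygous A/d is called "AA", B/d is called "BB", and d/d yields a missing call; EM re-estimates the allele frequencies from these incomplete calls. All names and counts below are invented for the example:

```python
def em_deletion_freq(n_aa, n_ab, n_bb, n_missing, iters=200):
    """EM estimate of allele frequencies (p_a, p_b, p_d) at a single SNP
    with a hidden deletion allele d, assuming Hardy-Weinberg equilibrium.
    A one-marker toy, not the windowed multi-SNP method of the thesis."""
    n = n_aa + n_ab + n_bb + n_missing
    p_a, p_b, p_d = 0.4, 0.4, 0.2                # arbitrary starting point
    for _ in range(iters):
        # E-step: split observed homozygote calls into true vs hemizygous
        aa_true = n_aa * p_a**2 / (p_a**2 + 2 * p_a * p_d)
        ad = n_aa - aa_true
        bb_true = n_bb * p_b**2 / (p_b**2 + 2 * p_b * p_d)
        bd = n_bb - bb_true
        # M-step: re-estimate frequencies from expected allele counts
        p_a = (2 * aa_true + ad + n_ab) / (2 * n)
        p_b = (2 * bb_true + bd + n_ab) / (2 * n)
        p_d = (ad + bd + 2 * n_missing) / (2 * n)
    return p_a, p_b, p_d

# genotype counts matching true frequencies (0.5, 0.3, 0.2), 1000 people
p_a, p_b, p_d = em_deletion_freq(450, 300, 210, 40)
```

Run on counts generated from frequencies (0.5, 0.3, 0.2), the iteration recovers them closely; the windowed version of this idea compares the fit with and without the deletion haplotype to decide whether a deletion is present.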
Abstract:
During the past ten years, large-scale transcript analysis using microarrays has become a powerful tool for identifying and predicting functions for new genes. It allows simultaneous monitoring of the expression of thousands of genes and has become a routinely used tool in laboratories worldwide. Microarray analysis will, together with other functional genomics tools, take us closer to understanding the functions of all genes in the genomes of living organisms. Flower development is a genetically regulated process which has mostly been studied in the traditional model species Arabidopsis thaliana, Antirrhinum majus and Petunia hybrida. The molecular mechanisms behind flower development in these species are partly applicable to other plant systems. However, not all biological phenomena can be approached with just a few model systems. In order to understand and apply the knowledge to ecologically and economically important plants, other species also need to be studied. Sequencing of 17 000 ESTs from nine different cDNA libraries of the ornamental plant Gerbera hybrida made it possible to construct a cDNA microarray with 9 000 probes, representing all the different ESTs in the database. Of the gerbera ESTs, 20% were unique to gerbera, while 373 were specific to the Asteraceae family of flowering plants. Gerbera has composite inflorescences with three types of flowers that differ from each other morphologically. The marginal ray flowers are large, often pigmented and female, while the central disc flowers are smaller, more radially symmetrical perfect flowers. The intermediate trans flowers are similar to ray flowers but smaller in size. This feature, together with the molecular tools now available for gerbera, makes gerbera a unique system in comparison to the common model plants, whose inflorescences carry only a single kind of flower.
In the first part of this thesis, conditions for gerbera microarray analysis were optimised, including experimental design, sample preparation and hybridization, as well as data analysis and verification. Moreover, in the first study, flower- and flower organ-specific genes were identified. After the reliability and reproducibility of the method were confirmed, the microarrays were utilized to investigate transcriptional differences between ray and disc flowers. This study revealed novel information about the morphological development as well as the transcriptional regulation of early stages of development in the various flower types of gerbera. The most interesting finding was the differential expression of MADS-box genes, suggesting the existence of flower type-specific regulatory complexes in the specification of the different flower types. The gerbera microarray was further used to profile changes in expression during petal development. Gerbera ray flower petals are large, which makes them an ideal model to study organogenesis. Six different stages were compared and specifically analysed. Expression profiles of genes related to cell structure and growth implied that during stage 2 cells divide, a process marked by the expression of histones, cyclins and tubulins. Stage 4 was found to be a transition stage between cell division and expansion, and by stage 6 cells had stopped dividing and instead underwent expansion. Interestingly, at the last analysed stage, stage 9, when cells no longer grew, the highest number of upregulated genes was detected. The gerbera microarray is a fully functioning tool for large-scale studies of flower development, and correlation with real-time RT-PCR results shows that it is also highly sensitive and reliable. The gene expression data presented here will be a source for gene expression mining and marker gene discovery in future studies performed in the Gerbera Laboratory.
The publicly available data will also serve the plant research community worldwide.
Abstract:
F4 fimbriae of enterotoxigenic Escherichia coli (ETEC) are highly stable multimeric structures with a capacity to evoke mucosal immune responses. With these characteristics, F4 fimbriae offer a unique model system for studying oral vaccination against ETEC-induced porcine postweaning diarrhea. Postweaning diarrhea is a major problem in piggeries worldwide and results in significant economic losses. No vaccine is currently available to protect weaned piglets against ETEC infections. Transgenic plants provide an economically feasible platform for large-scale production of vaccine antigens for animal health. In this study, the capacity of transgenic plants to produce the FaeG protein, the major structural subunit and adhesin of the F4 fimbria, was evaluated. Using the model plant tobacco, the optimal subcellular location for FaeG accumulation was examined. Targeting of FaeG to chloroplasts gave a superior accumulation level of 1% of total soluble protein (TSP) compared with the other investigated subcellular locations, namely the endoplasmic reticulum and the apoplast. Moreover, we determined whether the FaeG protein, when isolated from its fimbrial background and produced in a plant cell, would retain the key properties of an oral vaccine, i.e. stability in gastrointestinal conditions, binding to porcine intestinal F4 receptors (F4R), and inhibition of the attachment of F4-possessing (F4+) ETEC to F4R. The chloroplast-derived FaeG protein showed resistance to low pH and proteolysis in simulated gastrointestinal conditions and was able to bind to F4R, subsequently inhibiting F4+ ETEC binding in a dose-dependent manner. To investigate the oral immunogenicity of the FaeG protein, the edible crop plant alfalfa was transformed with the chloroplast-targeting construct, and as in tobacco, a high-yield FaeG accumulation of 1% of TSP was obtained.
A similar yield was also obtained in the seeds of barley, a valuable crop plant, when the FaeG-encoding gene was expressed under an endosperm-specific promoter and subcellularly targeted to the endoplasmic reticulum. Furthermore, desiccated alfalfa plants and barley grains were shown to be capable of storing the FaeG protein in a stable form for years. When the transgenic alfalfa plants were administered orally to weaned piglets, slight F4-specific systemic and mucosal immune responses were induced. Co-administration of the transgenic alfalfa and the mucosal adjuvant cholera toxin enhanced the F4-specific immune response; the duration and amount of F4+ E. coli excretion following F4+ ETEC challenge were significantly reduced compared with pigs that had received non-transgenic plant material. In conclusion, the results suggest that transgenic plants producing the FaeG subunit protein could be used for the production and delivery of oral vaccines against porcine F4+ ETEC infections. The findings thus present new approaches to developing a vaccination strategy against porcine postweaning diarrhea.
Abstract:
The increase in global temperature has been attributed to increased atmospheric concentrations of greenhouse gases (GHG), mainly that of CO2. The threat of severe and complex socio-economic and ecological implications of climate change has initiated an international process that aims to reduce emissions, to increase C sinks, and to protect existing C reservoirs. The Kyoto Protocol is a product of this process. The Kyoto Protocol and its accords state that signatory countries need to monitor their forest C pools, and to follow the guidelines set by the IPCC in the preparation, reporting and quality assessment of the C pool change estimates. The aims of this thesis were i) to estimate the changes in the carbon stocks of vegetation and soil in Finnish forests from 1922 to 2004, ii) to evaluate the applied methodology by using empirical data, iii) to assess the reliability of the estimates by means of uncertainty analysis, iv) to assess the effect of forest C sinks on the reliability of the entire national GHG inventory, and finally, v) to present an application of model-based stratification to a large-scale sampling design of soil C stock changes. The applied methodology builds on measured forest inventory data (or modelled stand data), and uses statistical modelling to predict biomasses and litter production, as well as a dynamic soil C model to predict the decomposition of litter. The mean vegetation C sink of Finnish forests from 1922 to 2004 was 3.3 Tg C a⁻¹, and the mean soil C sink was 0.7 Tg C a⁻¹. Soil is slowly accumulating C as a consequence of the increased growing stock and of soil C stocks that are unsaturated in relation to the current detritus input to soil, which is higher than at the beginning of the period. The annual estimates of vegetation and soil C stock changes fluctuated considerably during the period and were frequently opposite in sign (e.g. vegetation was a sink while soil was a source).
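The role of the dynamic soil C model in this methodology can be illustrated with a deliberately simplified one-pool, first-order decay sketch. This is not the multi-pool soil carbon model used in the thesis; the pool structure, parameter values and lack of climate dependence here are placeholder assumptions.

```python
def soil_carbon_series(stock0, inputs, k):
    """One-pool, first-order soil carbon model (illustrative sketch only).

    stock0: initial soil C stock (e.g. Tg C)
    inputs: annual litter input series (same unit per year)
    k:      annual fraction of the stock lost to decomposition (0 < k < 1)
    Returns the yearly stock trajectory, length len(inputs) + 1.
    """
    stocks = [stock0]
    for litter in inputs:
        # decomposition removes k * stock; litter input adds to the pool
        stocks.append(stocks[-1] * (1.0 - k) + litter)
    return stocks
```

With constant input I the stock relaxes toward the steady state I / k; a stock below that level keeps accumulating C, which mirrors the "unsaturated soil C stocks relative to current detritus input" mechanism described above.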
The inclusion of vegetation sinks in the national GHG inventory of 2003 increased its uncertainty from between -4% and 9% to ±19% (95% CI), and the further inclusion of upland mineral soils increased it to ±24%. The uncertainties of the annual sinks can be reduced most efficiently by concentrating on the quality of the model input data. Despite the decreased precision of the national GHG inventory, the inclusion of uncertain sinks improves its accuracy owing to the larger sectoral coverage of the inventory. If the national soil sink estimates were prepared by repeated soil sampling of model-stratified sample plots, the uncertainties would be accounted for in stratum formation and sample allocation. Otherwise, the gains in sampling efficiency from stratification remain smaller. The highly variable and frequently opposite annual changes in ecosystem C pools underline the importance of full ecosystem C accounting. If forest C sink estimates are to be used in practice, average sink estimates seem a more reasonable basis than annual estimates, because annual forest sinks vary considerably, annual estimates are uncertain, and these uncertainties have severe consequences for the reliability of the total national GHG balance. The estimation of average sinks should still be based on annual or even more frequent data, because the non-linear decomposition process is influenced by the annual climate. The methodology used in this study to predict forest C sinks can be transferred to other countries with some modifications. The ultimate verification of sink estimates should be based on comparison with empirical data, in which case the model-based stratification presented in this study can serve to improve the efficiency of the sampling design.
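The precision-versus-accuracy trade-off of adding an uncertain sink sector can be sketched with a simple Monte Carlo propagation. The sectoral means, standard deviations and the Gaussian error model below are invented placeholders, not the thesis values or its uncertainty analysis.

```python
import random

def combined_uncertainty(sectors, n=100_000, seed=1):
    """Monte Carlo propagation of independent sectoral uncertainties
    to a total GHG balance (illustrative sketch only).

    sectors: list of (mean, sd) pairs; sinks carry negative means.
    Returns (mean total, (2.5th percentile, 97.5th percentile)).
    """
    random.seed(seed)
    totals = sorted(
        sum(random.gauss(m, s) for m, s in sectors) for _ in range(n)
    )
    mean = sum(totals) / n
    lo, hi = totals[int(0.025 * n)], totals[int(0.975 * n)]
    return mean, (lo, hi)
```

Adding a hypothetical sink sector such as (-4, 2) to an emission sector (20, 1) widens the 95% interval (lower precision) while moving the mean toward the true net balance (better accuracy through larger sectoral coverage), which is the point made above.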
Abstract:
The planet Mars is the Earth's neighbour in the Solar System. Planetary research stems from humankind's fundamental need to explore its surroundings. Manned missions to Mars are already being planned, and understanding the environment to which the astronauts would be exposed is of utmost importance for a successful mission. Information on the Martian environment provided by models is already used in designing the landers and orbiters sent to the red planet. In particular, studies of the Martian atmosphere are crucial for instrument design, entry, descent and landing system design, landing site selection, and aerobraking calculations. Research on planetary atmospheres can also contribute to atmospheric studies of the Earth via model testing and the development of parameterizations: even after decades of modelling the Earth's atmosphere, we are still far from perfect weather predictions. On a global level, Mars has also been experiencing climate change. The aerosol effect is one of the largest unknowns in present studies of terrestrial climate change, and the role of aerosol particles in any climate is fundamental: studies of climate variations on another planet can help us better understand our own global change. In this thesis I have used an atmospheric column model for Mars to study the behaviour of the lowest layer of the atmosphere, the planetary boundary layer (PBL), and I have developed nucleation (particle formation) models for Martian conditions. The models were also coupled to study, for example, fog formation in the PBL. The PBL is perhaps the most significant part of the atmosphere for landers and humans, since we live in it and experience its state, for example, as gusty winds, night frost, and fogs. However, PBL modelling in weather prediction models remains a difficult task. Mars hosts a variety of cloud types, mainly composed of water ice particles, but CO2 ice clouds also form in the very cold polar night and at high altitudes elsewhere.
Nucleation is the first step in particle formation and always involves a phase transition. Cloud crystals on Mars form from vapour to ice on ubiquitous, suspended dust particles. Clouds on Mars have a small radiative effect in the present climate, but this effect may have been more important in the past. This thesis represents an attempt to model the Martian atmosphere at the smallest scales with high resolution. The models used and developed during the course of the research are useful tools for developing and testing parameterizations for larger-scale models all the way up to global climate models, since the small-scale models can describe processes that in the large-scale models are reduced to the subgrid (not explicitly resolved) scale.
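The phase-transition step in nucleation can be illustrated with the critical cluster radius from classical nucleation theory: clusters smaller than r* tend to re-evaporate, while larger ones grow into ice crystals. This is a generic textbook relation, not the heterogeneous dust-nucleation model developed in the thesis, and the example parameter values are rough assumptions.

```python
import math

def critical_radius(sigma, v_molec, T, S, k_B=1.380649e-23):
    """Kelvin critical radius from classical nucleation theory:
        r* = 2 * sigma * v_molec / (k_B * T * ln S)

    Generic illustration only (homogeneous-theory form; the thesis treats
    heterogeneous nucleation on dust with Mars-specific parameters).
    sigma:   surface tension of the condensate (J/m^2)
    v_molec: volume of one molecule in the condensed phase (m^3)
    T:       temperature (K)
    S:       saturation ratio (> 1 for nucleation to proceed)
    """
    if S <= 1.0:
        raise ValueError("nucleation requires supersaturation (S > 1)")
    return 2.0 * sigma * v_molec / (k_B * T * math.log(S))
```

With assumed water-ice values (sigma ≈ 0.1 J/m², v_molec ≈ 3.25e-29 m³) at T = 180 K, r* falls to nanometre scale and shrinks as the saturation ratio S increases, i.e. stronger supersaturation makes nucleation easier.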