987 results for graphical methods
Abstract:
In recent years, concern has arisen over the effects of increasing carbon dioxide (CO2) in the earth's atmosphere due to the burning of fossil fuels. One way to mitigate the increase in atmospheric CO2 concentration and climate change is carbon sequestration into forest vegetation through photosynthesis. Comparable regional-scale estimates of the carbon balance of forests are therefore needed for scientific and political purposes. The aim of the present dissertation was to improve methods for quantifying and verifying inventory-based carbon pool estimates of boreal forests on mineral soils. Ongoing forest inventories provide data based on statistically sound sampling for estimating the level of carbon stocks and stock changes, but improved modelling tools and comparisons of methods are still needed. In this dissertation, the entire inventory-based large-scale forest carbon stock assessment method was presented together with several separate methods for enhancing and comparing it. The enhancement methods presented here include ways to quantify the biomass of understorey vegetation and to estimate the litter production of needles and branches. In addition, the optical remote sensing method illustrated in this dissertation can be used for comparison with independent data. The forest inventory-based large-scale carbon stock assessment method demonstrated here provided reliable carbon estimates when compared with independent data. Future work to improve the accuracy of this method could consist of reducing the uncertainties regarding belowground biomass and litter production as well as the soil compartment. The methods developed will serve the needs of UNFCCC reporting and reporting under the Kyoto Protocol. The method is principally intended for analysts or planners interested in quantifying carbon over extensive forest areas.
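The inventory-based chain described above runs from measured stem volume to a carbon estimate. A minimal sketch of that arithmetic, not the dissertation's actual models: volume is expanded to whole-tree biomass with a biomass expansion factor (BEF) and a wood density, and carbon is taken as a fixed fraction of dry biomass (the IPCC default is about 0.5). All plot values and factors below are invented for illustration.

```python
# Sketch of an inventory-based carbon stock estimate for a set of plots.
# Assumptions: known BEF, basic wood density, and a 0.5 carbon fraction.

def plot_carbon_stock(stem_volume_m3_ha, wood_density_t_m3, bef, carbon_fraction=0.5):
    """Carbon stock (t C / ha) for one inventory plot."""
    biomass_t_ha = stem_volume_m3_ha * wood_density_t_m3 * bef
    return biomass_t_ha * carbon_fraction

# Illustrative (made-up) plot data: volume (m3/ha), density (t/m3), BEF.
plots = [(120.0, 0.40, 1.4), (85.0, 0.42, 1.5), (150.0, 0.41, 1.35)]
stocks = [plot_carbon_stock(v, d, b) for v, d, b in plots]
mean_stock = sum(stocks) / len(stocks)  # regional mean over the sample
```

In a real inventory the plot-level estimates would be weighted by the sampling design rather than averaged naively.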
Abstract:
The driving force behind this study has been the need to develop and apply methods for investigating the hydrogeochemical processes of significance to water management and artificial groundwater recharge. Isotope partitioning of elements in the course of physicochemical processes produces isotopic variations in their natural reservoirs. The tracer properties of the stable isotope abundances of oxygen, hydrogen and carbon have been applied to investigate hydrogeological processes in Finland. The work described here has initiated the use of stable isotope methods to achieve a better understanding of these processes in the shallow glacigenic formations of Finland. In addition, the regional precipitation and groundwater records will supplement the data on global precipitation and, as importantly, provide primary background data for hydrological studies. The isotopic composition of oxygen and hydrogen in Finnish groundwaters and atmospheric precipitation was determined in water samples collected during 1995-2005. Prior to this study, no detailed records existed on the spatial or annual variability of the isotopic composition of precipitation or groundwaters in Finland. Groundwaters and precipitation in Finland display a distinct spatial distribution of the isotopic ratios of oxygen and hydrogen. The depletion of the heavier isotopes as a function of increasing latitude is closely related to the local mean surface temperature. No significant differences were observed between the mean annual isotope ratios of oxygen and hydrogen in precipitation and those in local groundwaters. These results suggest that the link between the spatial variability in the isotopic composition of precipitation and local temperature is preserved in groundwaters. Artificial groundwater recharge into glacigenic sedimentary formations offers many possibilities to apply the isotopic ratios of oxygen, hydrogen and carbon as natural isotopic tracers.
In this study the systematics of dissolved carbon have been investigated in two geochemically different glacigenic groundwater formations: a typical esker aquifer at Tuusula, in southern Finland, and a carbonate-bearing aquifer with a complex internal structure at Virttaankangas, in southwest Finland. Reducing the concentration of dissolved organic carbon (DOC) in water is a primary challenge in the process of artificial groundwater recharge. The carbon isotope method was used as a tool to trace the role of redox processes in the decomposition of DOC. At the Tuusula site, artificial recharge leads to a significant decrease in the organic matter content of the infiltrated water: in total, 81% of the initial DOC present in the infiltrated water was removed in three successive stages of subsurface processes. Three distinct processes in the reduction of the DOC content were traced: the decomposition of dissolved organic carbon in the first stage of subsurface flow appeared to be the most significant contributor to DOC removal, whereas the further decrease in DOC has been attributed to adsorption and finally to dilution with local groundwater. Here, isotope methods were used for the first time to quantify the processes of DOC removal in artificial groundwater recharge. Groundwaters in the Virttaankangas aquifer are characterized by high pH values exceeding 9, which are exceptional for shallow aquifers on glaciated crystalline bedrock. The Virttaankangas sediments were discovered to contain trace amounts of fine-grained, dispersed calcite, which has a strong tendency to increase the pH of local groundwaters. Understanding the origin of the unusual geochemistry of the Virttaankangas groundwaters is an important issue for constraining the operation of the future artificial groundwater plant. The isotope ratios of oxygen and carbon in the sedimentary carbonate minerals have been successfully applied to constrain the origin of the dispersed calcite in the Virttaankangas sediments.
The isotopic and chemical characteristics of the groundwater in the distinct units of the aquifer were observed to vary depending on the aquifer mineralogy, the groundwater residence time and the openness of the system to soil CO2. The high pH values of >9 have been related to the dissolution of calcite into groundwater under closed or nearly closed system conditions with respect to soil CO2, at a low partial pressure of CO2.
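The spatial isotope patterns discussed above are conventionally reported in delta notation: the per-mil deviation of a sample's isotope ratio R from a reference standard (VSMOW for waters). A small sketch, with the sample ratio below invented for illustration; the VSMOW 18O/16O value and Craig's global meteoric water line are standard reference figures.

```python
# Delta notation for stable isotope ratios, as used for the oxygen and
# hydrogen records above. The sample ratio is hypothetical.

R_VSMOW_18O = 0.0020052  # 18O/16O ratio of the VSMOW reference standard

def delta_permil(r_sample, r_standard):
    """Per-mil deviation of a sample ratio from the standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

# An isotopically depleted (e.g. high-latitude) sample:
d18O = delta_permil(0.0019811, R_VSMOW_18O)   # negative = depleted

# Craig's global meteoric water line relates the two delta values:
d2H_gmwl = 8.0 * d18O + 10.0
```

The latitude effect described in the abstract appears in this notation as increasingly negative delta values toward the north.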
Abstract:
Context: Most studies assess pollination success at capsule maturity, and studies of pre-zygotic processes are often lacking. Aims: This study investigates the suitability of controlled pollination for a potential forestry plantation species, Eucalyptus argophloia, by examining pre- and post-zygotic pollination success. Methods: Pollen tube development, capsule set and seed set are compared following three-stop pollination, artificially induced protogyny (AIP), unpollinated AIP and open pollination. The fecundity of stored pollen was also compared with that of fresh pollen. Results: Three-stop pollination, AIP and open pollination produced similar numbers of pollen tubes, but unpollinated AIP produced none. Open pollination produced significantly more capsules and a greater total number of seeds than the other treatments. There were significantly more seeds per retained capsule for the open pollination and three-stop pollination treatments than for the AIP and unpollinated AIP treatments. There were no significant differences relative to the age of the pollen. Conclusions: Pre-zygotic success in terms of pollen tubes was similar for open pollination, three-stop pollination and AIP, but this was not reflected in post-zygotic success: the open pollination and three-stop methods produced significantly more seeds per retained capsule than the AIP treatments, and open pollination yielded the most seeds. The higher capsule set and total seed set under open pollination, and the fewer capsules in the controlled pollinations, may reflect physical damage to buds owing to the small flowers of E. argophloia. Suitable alternative breeding strategies other than controlled pollination are discussed for this species.
Abstract:
This paper deals with the analysis of the liquid limit of soils, an inferential parameter of universal acceptance. It has been undertaken primarily to re-examine one-point methods of determining liquid limit water contents. It has been shown from the basic characteristics of soils and associated physico-chemical factors that the critical shear strengths at liquid limit water contents arise out of force-field equilibrium and are independent of soil type. This leads to the formation of a scientific basis for liquid limit determination by one-point methods, which hitherto had been formulated purely on the statistical analysis of data. Available methods (Norman, 1959; Karlsson, 1961; Clayton & Jukes, 1978) of one-point liquid limit determination have been critically re-examined. A simple one-point cone penetrometer method of computing the liquid limit has been suggested and compared with other methods. The experimental data of Sherwood & Ryley (1970) have been employed to compare the different cone penetration methods. The results indicate that, apart from mere statistical considerations, one-point methods have a strong scientific basis in the uniqueness of the modified flow line irrespective of soil type. The normalized flow line is obtained by normalizing water contents by the liquid limit value, thereby nullifying the effects of surface areas and associated physico-chemical factors that are otherwise reflected in different responses at the macrolevel.
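A one-point cone method of the kind discussed above backs out the liquid limit from a single water content measured at one cone penetration. As a sketch only: one published relation of this type is w_L = w / (0.77 log10 d), with w the measured water content (%) and d the cone penetration (mm); at the standard 20 mm penetration the factor is approximately 1, so w_L = w. Treat the constant 0.77 as an assumption of this sketch, not a quotation of the paper's formula.

```python
import math

# One-point cone penetrometer estimate of the liquid limit, assuming the
# normalized flow line w / w_L = 0.77 * log10(d) (constant hypothetical).

def one_point_liquid_limit(water_content_pct, penetration_mm):
    """Estimate w_L (%) from one water content / penetration pair."""
    return water_content_pct / (0.77 * math.log10(penetration_mm))

# At the standard 20 mm penetration the estimate reduces to the measured w:
w_L = one_point_liquid_limit(50.0, 20.0)   # close to 50%
# A stiffer reading (lower penetration) implies a higher liquid limit:
w_L_dry = one_point_liquid_limit(45.0, 15.0)
```

The normalization by w_L is exactly what collapses different soils onto a single flow line, as the abstract argues.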
Abstract:
Lower water availability coupled with labor shortages has made growers increasingly unable to cultivate puddled transplanted rice (PTR). A field study was conducted in the wet season of 2012 and the dry season of 2013 to evaluate the performance of five rice establishment methods and four weed control treatments on weed management and rice yield. Grass weeds were more abundant in dry-seeded rice (DSR) than in PTR and nonpuddled transplanted rice (NPTR). The highest total weed density (225-256 plants m⁻²) and total weed biomass (315-501 g m⁻²) were recorded in DSR, while the lowest (102-129 plants m⁻² and 75-387 g m⁻²) were recorded in PTR. Compared with the weedy plots, the treatment pretilachlor followed by fenoxaprop plus ethoxysulfuron plus 2,4-D provided excellent weed control; this treatment, however, performed poorly in NPTR. In both seasons, herbicide efficacy was better in DSR and wet-seeded rice. PTR and DSR produced the maximum rice grain yields. The weed-free plots and herbicide treatments produced 84-614% and 58-504% higher rice grain yield, respectively, than the weedy plots in 2012, and a similar trend was observed in 2013.
Abstract:
A 4-degree-of-freedom single-input system and a 3-degree-of-freedom multi-input system are solved by the Coates, modified Coates and Chan-Mai flowgraph methods. It is concluded that the Chan-Mai flowgraph method is superior to the other flowgraph methods in such cases.
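The flowgraph methods named above are alternative ways of solving the simultaneous linear equations of a multi-degree-of-freedom system. As a point of reference for what such methods compute, here is a direct matrix solution of a hypothetical 3-degree-of-freedom spring chain K x = f (stiffnesses and load invented for illustration; this is not one of the flowgraph algorithms themselves).

```python
import numpy as np

# Direct solution of a 3-DOF static spring chain, fixed at one end,
# with a single load at the free end. K x = f.

k1, k2, k3 = 100.0, 150.0, 200.0   # spring stiffnesses (N/m), illustrative
K = np.array([[k1 + k2, -k2,       0.0],
              [-k2,      k2 + k3, -k3],
              [0.0,     -k3,       k3]])
f = np.array([0.0, 0.0, 10.0])     # 10 N applied at the free end
x = np.linalg.solve(K, f)          # static displacements of the 3 masses
```

A flowgraph method arrives at the same x by tracing gains through the graph of these equations instead of factorizing K.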
Abstract:
We review 20 studies that examined the persuasive processing and outcomes of health messages using neurocognitive measures. The results suggest that cognitive processes and neural activity in regions thought to reflect self-related processing may be more prominent in the persuasive processing of self-relevant messages. Furthermore, activity in the medial prefrontal cortex (MPFC), the superior temporal gyrus and the middle frontal gyrus was identified as a predictor of message effectiveness, with the MPFC accounting for additional variance in behaviour change beyond that accounted for by self-report measures. Incorporating neurocognitive measures may provide a more comprehensive understanding of the processing and outcomes of health messages.
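The MPFC finding above is, statistically, an incremental-variance claim: a neural predictor explains behaviour change beyond what self-report explains. A toy hierarchical-regression check on synthetic data (all variable names and effect sizes invented; not the reviewed studies' analyses):

```python
import numpy as np

# Hierarchical regression sketch: does adding an "mpfc" predictor raise
# R-squared over a self-report-only model? Data are simulated.

rng = np.random.default_rng(0)
n = 200
self_report = rng.normal(size=n)
mpfc = rng.normal(size=n)
behaviour = 0.4 * self_report + 0.5 * mpfc + rng.normal(scale=0.5, size=n)

def r_squared(X, y):
    """Ordinary least squares R^2 with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_reduced = r_squared(self_report[:, None], behaviour)
r2_full = r_squared(np.column_stack([self_report, mpfc]), behaviour)
delta_r2 = r2_full - r2_reduced   # variance explained beyond self-report
```

A positive delta_r2 is the synthetic analogue of the "additional variance beyond self-report" result.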
Abstract:
This thesis consists of an introduction, four research articles and an appendix. The thesis studies relations between two different approaches to the continuum limit of models of two-dimensional statistical mechanics at criticality. The approach of conformal field theory (CFT) can be thought of as an algebraic classification of some basic objects in these models; it has been successfully used by physicists since the 1980s. The other approach, Schramm-Loewner evolutions (SLEs), is a recently introduced set of mathematical methods for studying the random curves or interfaces occurring in the continuum limit of the models. The first and second included articles argue, on the basis of statistical mechanics, what a plausible relation between SLEs and conformal field theory would be. The first article studies multiple SLEs, that is, several random curves simultaneously in a domain. The proposed definition is compatible with a natural commutation requirement suggested by Dubédat. The curves of a multiple SLE may form different topological configurations, ``pure geometries''. We conjecture a relation between the topological configurations and the CFT concepts of conformal blocks and operator product expansions. Example applications of multiple SLEs include crossing probabilities for percolation and the Ising model. The second article studies SLE variants that represent models with boundary conditions implemented by primary fields. The best known of these, SLE(kappa, rho), is shown to be simple in terms of the Coulomb gas formalism of CFT. In the third article the space of local martingales for variants of SLE is shown to carry a representation of the Virasoro algebra. Finding this structure is guided by the relation of SLEs and CFTs in general, but the result is established in a straightforward fashion. This article, too, emphasizes multiple SLEs and proposes a possible way of treating pure geometries in terms of the Coulomb gas.
The fourth article states results from applying the Virasoro structure to the open questions of SLE reversibility and duality. Proofs of the stated results are provided in the appendix. The objective is an indirect computation of certain polynomial expected values. Provided that these expected values exist, they are shown in generic cases to possess the desired properties, thus giving support for both reversibility and duality.
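For orientation, a chordal SLE(kappa) trace can be simulated numerically by discretizing the Loewner evolution with piecewise-constant Brownian driving and recovering the curve by composing inverse slit maps. This is a standard textbook scheme, not a construction from the articles above; step sizes and parameters are illustrative.

```python
import numpy as np

# Numerical sketch of a chordal SLE(kappa) trace in the upper half-plane.
# Driving function U_t = sqrt(kappa) * B_t; on each step the driving is
# held constant, giving an explicit slit map whose inverse is
# F(z) = u + sqrt((z - u)^2 - 4*dt), branch chosen with Im >= 0.

def sle_trace(kappa, n_steps=200, dt=1e-3, seed=1):
    rng = np.random.default_rng(seed)
    dU = np.sqrt(kappa * dt) * rng.standard_normal(n_steps)
    U = np.concatenate([[0.0], np.cumsum(dU)])  # driving at step ends
    trace = []
    for n in range(1, n_steps + 1):
        z = complex(U[n], 1e-6)        # start just above the driving point
        for k in range(n, 0, -1):      # compose inverse maps, newest first
            u = U[k]
            w = np.sqrt((z - u) ** 2 - 4.0 * dt)
            if w.imag < 0:             # pick the upper-half-plane branch
                w = -w
            z = u + w
        trace.append(z)
    return np.array(trace)

gamma = sle_trace(kappa=2.0)   # kappa = 2: a simple (non-self-touching) phase
```

Varying kappa moves the curve through the simple, self-touching and space-filling phases, which is what makes SLE a one-parameter family of interface models.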
Abstract:
It is well known that an integrable (in the sense of Arnold-Jost) Hamiltonian system gives rise to quasi-periodic motion, with trajectories running on invariant tori. These tori foliate the whole phase space. If we perturb an integrable system, the Kolmogorov-Arnold-Moser (KAM) theorem states that, provided some non-degeneracy condition holds and the perturbation is sufficiently small, most of the invariant tori carrying quasi-periodic motion persist, getting only slightly deformed. The measure of the persisting invariant tori grows as the size of the perturbation decreases. In the first part of the thesis we use a Renormalization Group (RG) scheme to prove the classical KAM result in the case of a non-analytic perturbation (the latter is only assumed to have continuous derivatives up to a sufficiently large order). We proceed by solving a sequence of problems in which the perturbations are analytic approximations of the original one. We finally show that the approximate solutions converge to a differentiable solution of the original problem. In the second part we use an RG scheme with continuous scales, so that instead of solving an iterative equation as in the classical RG KAM approach, we end up solving a partial differential equation. This allows us to reduce the complications of treating a sequence of iterative equations to an application of the Banach fixed point theorem in a suitable Banach space.
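The final reduction above rests on the Banach fixed point theorem: a contraction on a complete metric space has a unique fixed point, obtained by iterating the map from any starting point. A one-function illustration with x -> cos x, which is a contraction on a suitable interval of the real line (nothing here reflects the thesis's actual Banach space):

```python
import math

# Banach fixed-point iteration: iterate a contraction until successive
# iterates are within a tolerance of each other.

def banach_iterate(f, x0, tol=1e-12, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

fixed_point = banach_iterate(math.cos, 1.0)   # the Dottie number, ~0.739085
```

The contraction constant controls the geometric convergence rate, which is what makes the theorem usable as a quantitative tool in the PDE setting described above.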
Abstract:
Genetics, the science of heredity and variation in living organisms, has a central role in medicine, in breeding crops and livestock, and in studying fundamental topics of the biological sciences such as evolution and cell functioning. Currently the field of genetics is developing rapidly because of recent advances in technologies by which molecular data can be obtained from living organisms. In order to extract the most information from such data, the analyses need to be carried out using statistical models that are tailored to take account of the particular genetic processes. In this thesis we formulate and analyze Bayesian models for genetic marker data of contemporary individuals. The major focus is on modeling the unobserved recent ancestry of the sampled individuals (say, for tens of generations or so), which is carried out using explicit probabilistic reconstructions of the pedigree structures accompanied by the gene flows at the marker loci. For such a recent history, the recombination process is the major genetic force shaping the genomes of the individuals, and it is included in the model by assuming that the recombination fractions between adjacent markers are known. The posterior distribution of the unobserved history of the individuals is studied conditionally on the observed marker data by using a Markov chain Monte Carlo (MCMC) algorithm. The example analyses consider estimation of the population structure, the relatedness structure (both at the level of whole genomes and at each marker separately), and haplotype configurations. For situations where the pedigree structure is partially known, an algorithm to create an initial state for the MCMC algorithm is given. Furthermore, the thesis includes an extension of the model of the recent genetic history to situations where a quantitative phenotype has also been measured from the contemporary individuals.
In that case the goal is to identify positions on the genome that affect the observed phenotypic values. This task is carried out within the Bayesian framework, where the number and the relative effects of the quantitative trait loci are treated as random variables whose posterior distribution is studied conditionally on the observed genetic and phenotypic data. In addition, the thesis contains an extension of a widely used haplotyping method, the PHASE algorithm, to settings where genetic material from several individuals has been pooled together and the allele frequencies of each pool are determined in a single genotyping.
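The posterior explorations above all rely on MCMC. A minimal random-walk Metropolis sampler for a toy target (a standard normal), standing in for the far richer pedigree and QTL posteriors of the thesis; all tuning values are illustrative.

```python
import math
import random

# Random-walk Metropolis: propose a Gaussian step, accept with
# probability min(1, posterior ratio), otherwise stay put.

def metropolis(log_post, x0, n_samples, step=1.0, seed=42):
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:   # accept
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy target: standard normal log-density (up to a constant).
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=20_000)
mean = sum(draws) / len(draws)
```

In the thesis setting the state is a whole pedigree-plus-gene-flow configuration and the proposals are structured moves, but the accept/reject logic is the same.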
Abstract:
Microarrays are high-throughput biological assays that allow the screening of thousands of genes for their expression. The main idea behind microarrays is to compute for each gene a unique signal that is directly proportional to the quantity of mRNA that was hybridized on the chip. The large number of steps involved, and the errors associated with each step, make the generated expression signal noisy. As a result, microarray data need to be carefully pre-processed before their analysis can be assumed to lead to reliable and biologically relevant conclusions. This thesis focuses on developing methods for improving the gene signal and further utilizing this improved signal for higher-level analysis. To achieve this, first, approaches for designing microarray experiments using various optimality criteria, considering both biological and technical replicates, are described. A carefully designed experiment yields signal with low noise, as the effect of unwanted variation is minimized and the precision of the estimates of the parameters of interest is maximized. Second, a system for improving the gene signal by using three scans at varying scanner sensitivities is developed. A novel Bayesian latent intensity model is then applied to these three sets of expression values, corresponding to the three scans, to estimate a suitably calibrated true signal for each gene. Third, a novel image segmentation approach that segregates the fluorescent signal from the undesired noise is developed using an additional dye, SYBR Green RNA II. This technique helps to identify signal arising only from the hybridized DNA, while signal corresponding to dust, scratches, spilled dye and other noise is avoided. Fourth, an integrated statistical model is developed in which signal correction, systematic array effects, dye effects and differential expression are modelled jointly, as opposed to applying several methods of analysis sequentially.
The methods described here have been tested only on cDNA microarrays but can, with some modifications, also be applied to other high-throughput technologies. Keywords: high-throughput technology, microarray, cDNA, multiple scans, Bayesian hierarchical models, image analysis, experimental design, MCMC, WinBUGS.
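A toy version of the multiple-scan idea above: scans at increasing sensitivity give gain-multiplied readings that clip at the scanner ceiling, and a latent "true" intensity can be recovered from the unsaturated readings once the gains are known. The thesis's model is Bayesian; this moment-based sketch (with invented gains and readings) only illustrates the data structure.

```python
# Recover a latent spot intensity from three scans at different
# sensitivities, discarding saturated readings. Gains are assumed known.

SATURATION = 65535.0           # 16-bit scanner ceiling
GAINS = (1.0, 4.0, 16.0)       # relative sensitivities of the three scans

def recover_intensity(readings):
    """Average the gain-corrected readings that did not saturate."""
    vals = [r / g for r, g in zip(readings, GAINS) if r < SATURATION]
    if not vals:                          # saturated even at the lowest gain
        return SATURATION / GAINS[0]
    return sum(vals) / len(vals)

# A bright spot saturates the two high-sensitivity scans:
bright = recover_intensity((52000.0, 65535.0, 65535.0))
# A dim spot is consistent across all three scans:
dim = recover_intensity((800.0, 3200.0, 12800.0))
```

The high-gain scans add precision for dim spots, while the low-gain scan rescues bright spots from saturation; the Bayesian model combines both effects with proper uncertainty.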
Abstract:
Bacteria play an important role in many ecological systems. The molecular characterization of bacteria using either cultivation-dependent or cultivation-independent methods reveals the large scale of bacterial diversity in natural communities, and the vastness of subpopulations within a species or genus. Understanding how bacterial diversity varies across different environments, and also within populations, should provide insights into many important questions of bacterial evolution and population dynamics. This thesis presents novel statistical methods for analyzing bacterial diversity using widely employed molecular fingerprinting techniques. The first objective of this thesis was to develop Bayesian clustering models to identify bacterial population structures. Bacterial isolates were identified using multilocus sequence typing (MLST), and Bayesian clustering models were used to explore the evolutionary relationships among the isolates. Our method involves the inference of genetic population structures via an unsupervised clustering framework in which the dependence between loci is represented using graphical models. The population dynamics that generate such a population stratification were investigated using a stochastic model, in which homologous recombination between subpopulations can be quantified within a gene flow network. The second part of the thesis focuses on cluster analysis of community compositional data produced by two different cultivation-independent analyses: terminal restriction fragment length polymorphism (T-RFLP) analysis, and fatty acid methyl ester (FAME) analysis. The cluster analysis aims to group bacterial communities that are similar in composition, which is an important step towards understanding the overall influences of environmental and ecological perturbations on bacterial diversity.
A common feature of T-RFLP and FAME data is zero-inflation: zero values are observed much more frequently than would be expected from, for example, a Poisson distribution in the discrete case or a Gaussian distribution in the continuous case. We provide two strategies for modeling zero-inflation in the clustering framework, which were validated on both synthetic and complex empirical data sets. We show in the thesis that our model, which takes into account the dependencies between loci in MLST data, can produce better clustering results than methods that assume independent loci. Furthermore, computer algorithms that are efficient in analyzing large-scale data were adopted to meet the increasing computational need. Our method for detecting homologous recombination in subpopulations may provide a theoretical criterion for defining bacterial species. The clustering of bacterial community data, including T-RFLP and FAME data, provides an initial effort towards discovering the evolutionary dynamics that structure and maintain bacterial diversity in the natural environment.
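Zero-inflation in the discrete case can be made concrete with the standard zero-inflated Poisson mixture: with probability pi the count is a structural zero, otherwise it is Poisson(lam). This is the textbook form of the idea, not the thesis's specific clustering likelihood; parameter values below are illustrative.

```python
import math

# Zero-inflated Poisson probability mass function: a point mass at zero
# mixed with an ordinary Poisson distribution.

def zip_pmf(k, pi, lam):
    """P(X = k) under a zero-inflated Poisson(pi, lam)."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return (pi if k == 0 else 0.0) + (1.0 - pi) * poisson

p0 = zip_pmf(0, pi=0.3, lam=2.0)   # inflated probability of observing 0
p0_poisson = math.exp(-2.0)        # plain Poisson probability of 0
```

The gap between p0 and p0_poisson is exactly the excess of zeros the clustering model has to absorb; the continuous (FAME) case replaces the Poisson component with a Gaussian.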