999 results for Methods : Miscellaneous


Relevance:

20.00%

Publisher:

Abstract:

The driving force behind this study has been the need to develop and apply methods for investigating the hydrogeochemical processes of significance to water management and artificial groundwater recharge. Isotope partitioning of elements during physicochemical processes produces isotopic variations in their natural reservoirs. The tracer properties of the stable isotope abundances of oxygen, hydrogen and carbon have been applied to investigate hydrogeological processes in Finland. The work described here initiated the use of stable isotope methods for achieving a better understanding of these processes in the shallow glacigenic formations of Finland. In addition, the regional precipitation and groundwater records supplement the global precipitation data and, as importantly, provide primary background data for hydrological studies. The isotopic composition of oxygen and hydrogen in Finnish groundwaters and atmospheric precipitation was determined in water samples collected during 1995-2005. Prior to this study, no detailed records existed on the spatial or annual variability of the isotopic composition of precipitation or groundwaters in Finland. Groundwaters and precipitation in Finland display a distinct spatial distribution of the isotopic ratios of oxygen and hydrogen. The depletion of the heavier isotopes with increasing latitude is closely related to the local mean surface temperature. No significant differences were observed between the mean annual isotope ratios of oxygen and hydrogen in precipitation and those in local groundwaters. These results suggest that the link between the spatial variability in the isotopic composition of precipitation and local temperature is preserved in groundwaters. Artificial groundwater recharge to glacigenic sedimentary formations offers many possibilities to apply the isotopic ratios of oxygen, hydrogen and carbon as natural tracers.
In this study the systematics of dissolved carbon were investigated in two geochemically different glacigenic groundwater formations: a typical esker aquifer at Tuusula, in southern Finland, and a carbonate-bearing aquifer with a complex internal structure at Virttaankangas, in southwest Finland. Reducing the concentration of dissolved organic carbon (DOC) in water is a primary challenge in the process of artificial groundwater recharge. The carbon isotope method was used as a tool to trace the role of redox processes in the decomposition of DOC. At the Tuusula site, artificial recharge leads to a significant decrease in the organic matter content of the infiltrated water. In total, 81% of the DOC initially present in the infiltrated water was removed in three successive stages of subsurface processes. Three distinct processes in the reduction of the DOC content were traced: decomposition of dissolved organic carbon during the first stage of subsurface flow appeared to account for the largest part of DOC removal, whereas the further decrease in DOC was attributed to adsorption and, finally, to dilution with local groundwater. Here, isotope methods were used for the first time to quantify the processes of DOC removal in artificial groundwater recharge. Groundwaters in the Virttaankangas aquifer are characterized by high pH values exceeding 9, which are exceptional for shallow aquifers on glaciated crystalline bedrock. The Virttaankangas sediments were discovered to contain trace amounts of fine-grained, dispersed calcite, which has a strong tendency to increase the pH of local groundwaters. Understanding the origin of the unusual geochemistry of the Virttaankangas groundwaters is an important issue for constraining the operation of the future artificial groundwater plant. The isotope ratios of oxygen and carbon in sedimentary carbonate minerals were successfully applied to constrain the origin of the dispersed calcite in the Virttaankangas sediments.
The isotopic and chemical characteristics of the groundwater in the distinct units of the aquifer were observed to vary depending on the aquifer mineralogy, the groundwater residence time and the openness of the system to soil CO2. The high pH values of > 9 have been related to the dissolution of calcite into groundwater under closed or nearly closed system conditions with respect to soil CO2, at a low partial pressure of CO2.
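The dilution stage above rests on a conservative-tracer mass balance: for a two-component mixture, the isotopic value of the mixed water fixes the proportions of the two end members. A minimal sketch with hypothetical delta-18O values (the actual end-member compositions are site-specific and not given here):

```python
def mixing_fraction(delta_mix, delta_end1, delta_end2):
    """Fraction of end member 1 in a two-component mixture, from a
    conservative-tracer (e.g. delta-18O) mass balance:
    delta_mix = f * delta_end1 + (1 - f) * delta_end2."""
    return (delta_mix - delta_end2) / (delta_end1 - delta_end2)

# Hypothetical values (permil, VSMOW): infiltrated surface water is
# assumed isotopically heavier than the local groundwater.
f_infiltrated = mixing_fraction(delta_mix=-11.0,
                                delta_end1=-8.0,    # infiltrated water
                                delta_end2=-12.0)   # local groundwater
print(round(f_infiltrated, 2))  # 0.25
```

The same balance applies to any conservative tracer, which is what makes the stable isotope ratios useful for quantifying the final dilution stage.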

Relevance:

20.00%

Publisher:

Abstract:

Context Most studies assess pollination success at capsule maturity, and studies of pre-zygotic processes are often lacking. Aims This study investigates the suitability of controlled pollination for a potential forestry plantation species, Eucalyptus argophloia, by examining pre- and post-zygotic pollination success. Methods Pollen tube development, capsule set and seed set were compared following three-stop pollination, artificially induced protogyny (AIP), unpollinated AIP and open pollination. The fecundity of stored pollen was compared with that of fresh pollen. Results Three-stop pollination, AIP and open pollination produced similar numbers of pollen tubes, whereas unpollinated AIP produced none. Open pollination produced significantly more capsules and a greater total number of seeds than the other treatments. There were significantly more seeds per retained capsule for the open-pollination and three-stop treatments than for the AIP and unpollinated AIP treatments. There were no significant differences relative to the age of the pollen. Conclusions Pre-zygotic success in terms of pollen tubes was similar for open pollination, three-stop pollination and AIP, but this was not reflected in post-zygotic success: the open-pollination and three-stop treatments produced significantly more seeds per retained capsule than the AIP treatments, and open pollination yielded more seeds overall. The higher capsule set and total seed set under open pollination, and the fewer capsules in the controlled pollinations, may reflect physical damage to buds caused by the small size of E. argophloia flowers. Suitable alternative breeding strategies to controlled pollination are discussed for this species.

Relevance:

20.00%

Publisher:

Abstract:

This paper deals with the analysis of the liquid limit of soils, an inferential parameter of universal acceptance. It was undertaken primarily to re-examine one-point methods of determining liquid limit water contents. It is shown from the basic characteristics of soils and the associated physico-chemical factors that the critical shear strengths at liquid limit water contents arise out of force-field equilibrium and are independent of soil type. This provides a scientific basis for liquid limit determination by one-point methods, which hitherto had been formulated purely on statistical analysis of data. The available methods of one-point liquid limit determination (Norman, 1959; Karlsson, 1961; Clayton & Jukes, 1978) are critically re-examined. A simple one-point cone penetrometer method of computing the liquid limit is suggested and compared with the other methods. The experimental data of Sherwood & Ryley (1970) are employed to compare the different cone penetration methods. The results indicate that, beyond mere statistical considerations, one-point methods have a strong scientific basis in the uniqueness of the modified flow line irrespective of soil type. The normalized flow line is obtained by normalizing water contents by liquid limit values, thereby nullifying the effects of surface areas and the associated physico-chemical factors that are otherwise reflected in different responses at the macrolevel.
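The one-point idea can be illustrated with a simple calculation: if the flow line is assumed to follow a power law in cone penetration, a single (water content, penetration) pair determines the liquid limit. This is only a generic sketch; the exponent m below is a hypothetical value, not the relationship derived in the paper.

```python
def liquid_limit_one_point(w, d, m=0.3):
    """Illustrative one-point fall-cone estimate of the liquid limit.

    w -- water content (%) measured at a single cone penetration
    d -- cone penetration (mm); 20 mm is taken as the reference
         penetration at the liquid limit (BS 80 g / 30 degree cone)
    m -- assumed flow-line exponent (a hypothetical value here)

    Assumes a power-law flow line w / w_L = (d / 20)**m, so that
    w_L = w * (20 / d)**m.
    """
    return w * (20.0 / d) ** m

# At the reference penetration the measured water content is the
# liquid limit itself:
print(liquid_limit_one_point(50.0, 20.0))  # 50.0
```

A penetration below 20 mm indicates a soil drier than its liquid limit, so the estimate correctly adjusts the measured water content upward in that case.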

Relevance:

20.00%

Publisher:

Abstract:

Lower water availability coupled with labor shortages has made growers increasingly unable to cultivate puddled transplanted rice (PTR). A field study was conducted in the wet season of 2012 and the dry season of 2013 to evaluate the performance of five rice establishment methods and four weed control treatments on weed management and rice yield. Grass weeds were more abundant in dry-seeded rice (DSR) than in PTR and nonpuddled transplanted rice (NPTR). The highest total weed density (225-256 plants m^-2) and total weed biomass (315-501 g m^-2) were recorded in DSR, while the lowest (102-129 plants m^-2 and 75-387 g m^-2) were recorded in PTR. Compared with the weedy plots, the treatment of pretilachlor followed by fenoxaprop plus ethoxysulfuron plus 2,4-D provided excellent weed control. This treatment, however, performed poorly in NPTR. In both seasons, herbicide efficacy was better in DSR and wet-seeded rice. PTR and DSR produced the highest rice grain yields. The weed-free plots and herbicide treatments produced 84-614% and 58-504% higher rice grain yield, respectively, than the weedy plots in 2012, and a similar trend was observed in 2013.

Relevance:

20.00%

Publisher:

Abstract:

A 4-degree-of-freedom single-input system and a 3-degree-of-freedom multi-input system are solved by the Coates', modified Coates' and Chan-Mai flowgraph methods. It is concluded that the Chan-Mai flowgraph method is superior to other flowgraph methods in such cases.

Relevance:

20.00%

Publisher:

Abstract:

We review 20 studies that examined the persuasive processing and outcomes of health messages using neurocognitive measures. The results suggest that cognitive processes and neural activity in regions thought to reflect self-related processing may be more prominent in the persuasive processing of self-relevant messages. Furthermore, activity in the medial prefrontal cortex (MPFC), the superior temporal gyrus, and the middle frontal gyrus was identified as a predictor of message effectiveness, with the MPFC accounting for additional variance in behaviour change beyond that accounted for by self-report measures. Incorporating neurocognitive measures may provide a more comprehensive understanding of the processing and outcomes of health messages.

Relevance:

20.00%

Publisher:

Abstract:

This thesis consists of an introduction, four research articles and an appendix. The thesis studies relations between two different approaches to the continuum limit of models of two-dimensional statistical mechanics at criticality. The approach of conformal field theory (CFT) can be thought of as the algebraic classification of some basic objects in these models. It has been successfully used by physicists since the 1980s. The other approach, Schramm-Loewner evolutions (SLEs), is a recently introduced set of mathematical methods for studying random curves or interfaces occurring in the continuum limit of the models. The first and second included articles argue, on the basis of statistical mechanics, what a plausible relation between SLEs and conformal field theory would be. The first article studies multiple SLEs: several random curves simultaneously in a domain. The proposed definition is compatible with a natural commutation requirement suggested by Dubédat. The curves of a multiple SLE may form different topological configurations, "pure geometries". We conjecture a relation between the topological configurations and the CFT concepts of conformal blocks and operator product expansions. Example applications of multiple SLEs include crossing probabilities for percolation and the Ising model. The second article studies SLE variants that represent models with boundary conditions implemented by primary fields. The most well known of these, SLE(kappa, rho), is shown to be simple in terms of the Coulomb gas formalism of CFT. In the third article the space of local martingales for variants of SLE is shown to carry a representation of the Virasoro algebra. Finding this structure is guided by the relation of SLEs and CFTs in general, but the result is established in a straightforward fashion. This article, too, emphasizes multiple SLEs and proposes a possible way of treating pure geometries in terms of the Coulomb gas.
The fourth article states the results of applying the Virasoro structure to the open questions of SLE reversibility and duality. Proofs of the stated results are provided in the appendix. The objective is an indirect computation of certain polynomial expected values. Provided that these expected values exist, in generic cases they are shown to possess the desired properties, thus giving support for both reversibility and duality.
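For orientation, the SLE side of this correspondence is built on the chordal Loewner equation; the following standard definition (generic, not the thesis's own notation) fixes the objects involved:

```latex
% Chordal Loewner evolution driven by scaled Brownian motion:
\partial_t g_t(z) \;=\; \frac{2}{g_t(z) - W_t},
\qquad g_0(z) = z, \qquad W_t = \sqrt{\kappa}\, B_t .
% The associated CFT has central charge
c \;=\; \frac{(3\kappa - 8)(6 - \kappa)}{2\kappa},
% so e.g. kappa = 2 (loop-erased random walk) gives c = -2.
```

The parameter kappa controls the roughness of the random curve, and the central charge formula is one concrete expression of the SLE-CFT relation studied in the articles.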

Relevance:

20.00%

Publisher:

Abstract:

It is well known that an integrable (in the sense of Arnold-Jost) Hamiltonian system gives rise to quasi-periodic motion, with trajectories running on invariant tori. These tori foliate the whole phase space. If we perturb an integrable system, the Kolmogorov-Arnold-Moser (KAM) theorem states that, provided a non-degeneracy condition holds and the perturbation is sufficiently small, most of the invariant tori carrying quasi-periodic motion persist, getting only slightly deformed. The measure of the set of persisting invariant tori grows as the size of the perturbation shrinks. In the first part of the thesis we use a Renormalization Group (RG) scheme to prove the classical KAM result in the case of a non-analytic perturbation (the latter is only assumed to have continuous derivatives up to a sufficiently large order). We proceed by solving a sequence of problems in which the perturbations are analytic approximations of the original one. We finally show that the approximate solutions converge to a differentiable solution of our original problem. In the second part we use an RG scheme with continuous scales, so that instead of solving an iterative equation as in the classical RG KAM, we end up solving a partial differential equation. This allows us to reduce the complications of treating a sequence of iterative equations to the use of the Banach fixed point theorem in a suitable Banach space.
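In the standard formulation, the setting above concerns a nearly integrable Hamiltonian in action-angle variables, with Kolmogorov's non-degeneracy condition on the unperturbed part:

```latex
% Nearly integrable Hamiltonian, action-angle variables (I, theta):
H(I, \theta) \;=\; H_0(I) + \varepsilon f(I, \theta),
\qquad (I, \theta) \in \mathbb{R}^n \times \mathbb{T}^n .
% Kolmogorov non-degeneracy of the unperturbed part:
\det\!\left( \frac{\partial^2 H_0}{\partial I^2} \right) \neq 0 .
% On a persisting torus the flow is quasi-periodic,
\theta(t) = \theta_0 + \omega t, \qquad \omega = \frac{\partial H_0}{\partial I} .
```

The theorem then asserts that for sufficiently small epsilon, tori whose frequency vector omega satisfies a Diophantine condition survive the perturbation in slightly deformed form.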

Relevance:

20.00%

Publisher:

Abstract:

Genetics, the science of heredity and variation in living organisms, has a central role in medicine, in breeding crops and livestock, and in studying fundamental topics of the biological sciences such as evolution and cell functioning. The field of genetics is currently developing rapidly because of recent advances in technologies by which molecular data can be obtained from living organisms. To extract the most information from such data, the analyses need to be carried out using statistical models that are tailored to the particular genetic processes. In this thesis we formulate and analyze Bayesian models for genetic marker data of contemporary individuals. The major focus is on modeling the unobserved recent ancestry of the sampled individuals (say, for tens of generations or so), which is carried out using explicit probabilistic reconstructions of the pedigree structures accompanied by the gene flows at the marker loci. For such a recent history, the recombination process is the major genetic force shaping the genomes of the individuals, and it is included in the model by assuming that the recombination fractions between adjacent markers are known. The posterior distribution of the unobserved history of the individuals is studied conditionally on the observed marker data using a Markov chain Monte Carlo (MCMC) algorithm. The example analyses consider estimation of the population structure, the relatedness structure (both at the level of whole genomes and at each marker separately), and haplotype configurations. For situations where the pedigree structure is partially known, an algorithm for creating an initial state for the MCMC algorithm is given. Furthermore, the thesis extends the model of the recent genetic history to situations where a quantitative phenotype has also been measured from the contemporary individuals.
In that case the goal is to identify positions on the genome that affect the observed phenotypic values. This task is carried out within the Bayesian framework, where the number and the relative effects of the quantitative trait loci are treated as random variables whose posterior distribution is studied conditionally on the observed genetic and phenotypic data. In addition, the thesis extends a widely used haplotyping method, the PHASE algorithm, to settings where genetic material from several individuals has been pooled together and the allele frequencies of each pool are determined in a single genotyping.
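The MCMC machinery used in such analyses is far more elaborate than can be shown briefly, but its core is the Metropolis accept/reject step. A generic random-walk Metropolis sketch on a toy one-dimensional target (an illustration only, not the thesis's pedigree sampler) conveys the idea:

```python
import math
import random

def metropolis(log_post, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis: the basic MCMC kernel used (in far more
    structured form) to explore posteriors over latent quantities."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        # Accept with probability min(1, post(prop) / post(x)).
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy target: standard normal log-density (up to a constant).
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=5000)
mean = sum(draws) / len(draws)
```

In the thesis the state is a pedigree together with gene flows rather than a real number, and the proposals are correspondingly structured, but the acceptance rule has the same form.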

Relevance:

20.00%

Publisher:

Abstract:

Microarrays are high-throughput biological assays that allow the screening of thousands of genes for their expression. The main idea behind microarrays is to compute for each gene a unique signal that is directly proportional to the quantity of mRNA that was hybridized on the chip. The large number of steps, and the errors associated with each step, make the generated expression signal noisy. As a result, microarray data need to be carefully pre-processed before their analysis can be assumed to lead to reliable and biologically relevant conclusions. This thesis focuses on developing methods for improving the gene signal and on further utilizing this improved signal for higher-level analysis. To achieve this, first, approaches for designing microarray experiments using various optimality criteria, considering both biological and technical replicates, are described. A carefully designed experiment leads to a signal with low noise, as the effect of unwanted variation is minimized and the precision of the estimates of the parameters of interest is maximized. Second, a system for improving the gene signal by using three scans at varying scanner sensitivities is developed. A novel Bayesian latent intensity model is then applied to these three sets of expression values, corresponding to the three scans, to estimate the suitably calibrated true signal of the genes. Third, a novel image segmentation approach that segregates the fluorescent signal from undesired noise is developed using an additional dye, SYBR Green RNA II. This technique helps to identify signal originating only from the hybridized DNA, while signal corresponding to dust, scratches, dye spills and other noise is excluded. Fourth, an integrated statistical model is developed in which signal correction, systematic array effects, dye effects and differential expression are modelled jointly, as opposed to a sequential application of several methods of analysis.
The methods described here have been tested only on cDNA microarrays, but could also, with some modifications, be applied to other high-throughput technologies. Keywords: high-throughput technology, microarray, cDNA, multiple scans, Bayesian hierarchical models, image analysis, experimental design, MCMC, WinBUGS.
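The intuition behind the three-scan design can be shown with a deliberately simplistic sketch (not the thesis's Bayesian latent intensity model): treat each scan as a known gain applied to the same latent signal, discard saturated readings, and average the rescaled values. The gains and the saturation ceiling below are hypothetical.

```python
SATURATION = 65535  # hypothetical 16-bit scanner ceiling

def combine_scans(observed, gains):
    """Naive multi-scan calibration sketch: each scan i reports roughly
    gain_i * true signal, clipped at saturation. Rescale the
    unsaturated readings to a common scale and average them."""
    rescaled = [obs / g for obs, g in zip(observed, gains)
                if obs < SATURATION]
    if not rescaled:
        return float('nan')  # spot saturated in every scan
    return sum(rescaled) / len(rescaled)

# One spot read at low / medium / high sensitivity (assumed gains):
est = combine_scans([1000.0, 2000.0, 4000.0], gains=[1.0, 2.0, 4.0])
print(est)  # 1000.0
```

The low-sensitivity scan keeps bright spots out of saturation while the high-sensitivity scan lifts dim spots above the noise floor; combining the scans thus extends the usable dynamic range, which is what the Bayesian model exploits in a statistically principled way.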

Relevance:

20.00%

Publisher:

Abstract:

Metabolism is the cellular subsystem responsible for the generation of energy from nutrients and the production of building blocks for larger macromolecules. Computational and statistical modeling of metabolism is vital to many disciplines, including bioengineering, the study of diseases, drug target identification, and understanding the evolution of metabolism. In this thesis, we propose efficient computational methods for metabolic modeling. The techniques presented are targeted particularly at the analysis of large metabolic models encompassing the whole metabolism of one or several organisms. We concentrate on three major themes of metabolic modeling: metabolic pathway analysis, metabolic reconstruction, and the study of the evolution of metabolism. In the first part of this thesis, we study metabolic pathway analysis. We propose a novel modeling framework, called gapless modeling, to study biochemically viable metabolic networks and pathways. In addition, we investigate the utilization of atom-level information on metabolism to improve the quality of pathway analyses. We describe efficient algorithms for discovering both gapless and atom-level metabolic pathways, and conduct experiments with large-scale metabolic networks. The gapless approach offers a compromise, in terms of complexity and feasibility, between the previous graph-theoretic and stoichiometric approaches to metabolic modeling. Gapless pathway analysis shows that microbial metabolic networks are not as robust to random damage as suggested by previous studies. Furthermore, the amino acid biosynthesis pathways of the fungal species Trichoderma reesei discovered from atom-level data are shown to correspond closely to those of Saccharomyces cerevisiae. In the second part, we propose computational methods for metabolic reconstruction in the gapless modeling framework. We study the task of reconstructing a metabolic network that does not suffer from connectivity problems.
Such problems often limit the usability of reconstructed models and typically require a significant amount of manual postprocessing. We formulate gapless metabolic reconstruction as an optimization problem and propose an efficient divide-and-conquer strategy to solve it on real-world instances. We also describe computational techniques for resolving ambiguities in metabolite naming. These techniques have been implemented in ReMatch, a web-based software tool intended for the reconstruction of models for 13C metabolic flux analysis. In the third part, we extend our scope from single to multiple metabolic networks and propose an algorithm for inferring gapless metabolic networks of ancestral species from phylogenetic data. Experimenting with 16 fungal species, we show that the method generates results that are easily interpretable and that provide hypotheses about the evolution of metabolism.
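Gaplessness is essentially a reachability property: every metabolite in the network should be producible from the nutrient inputs through some chain of reactions whose substrates are all themselves producible. The standard scope-expansion iteration captures this; the toy reaction format below is illustrative, not ReMatch's data model.

```python
def scope(reactions, seeds):
    """Iteratively expand the set of producible metabolites: a reaction
    fires once all of its substrates are available, and its products
    then become available too."""
    available = set(seeds)
    changed = True
    while changed:
        changed = False
        for substrates, products in reactions:
            if set(substrates) <= available and not set(products) <= available:
                available |= set(products)
                changed = True
    return available

# Toy network: A + B -> C, C -> D, E -> F (E is never available).
toy = [(['A', 'B'], ['C']), (['C'], ['D']), (['E'], ['F'])]
print(sorted(scope(toy, ['A', 'B'])))  # ['A', 'B', 'C', 'D']
```

A reconstructed network is connectivity-problem-free in this sense when its scope from the nutrient medium covers all metabolites the model is supposed to produce; reactions like E -> F above are the "gaps" the reconstruction must avoid or fill.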

Relevance:

20.00%

Publisher:

Abstract:

Large-scale chromosome rearrangements such as copy number variants (CNVs) and inversions account for a considerable proportion of the genetic variation between human individuals. In a number of cases, they have been closely linked with various heritable diseases. Single-nucleotide polymorphisms (SNPs) are another large part of the genetic variation between individuals. They are also typically abundant, and measuring them is straightforward and cheap. This thesis presents computational means of using SNPs to detect the presence of inversions and deletions, a particular variety of CNV. Technically, the inversion-detection algorithm detects the suppressed recombination rate between inverted and non-inverted haplotype populations, whereas the deletion-detection algorithm uses the EM algorithm to estimate the haplotype frequencies of a window with and without a deletion haplotype. As a contribution to population biology, a coalescent simulator for simulating inversion polymorphisms has been developed. Coalescent simulation is a backward-in-time method of modelling population ancestry. Technically, the simulator also models multiple crossovers by using the Counting model as the chiasma interference model. Finally, this thesis includes an experimental section. The aforementioned methods were tested on synthetic data to evaluate their power and specificity. They were also applied to the HapMap Phase II and Phase III data sets, yielding a number of candidate previously unknown inversions and deletions, and correctly detecting known rearrangements.
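The backward-in-time logic of coalescent simulation is compact enough to sketch: with k lineages present, the waiting time to the next pairwise merger is exponential with rate k(k-1)/2 in coalescent units. The sketch below covers only the waiting times of the standard Kingman coalescent; the thesis's simulator additionally handles inversion polymorphisms, recombination, and the Counting model of chiasma interference.

```python
import random

def coalescent_times(n, pop_size=10000, seed=1):
    """Waiting times of a standard (Kingman) coalescent for n sampled
    lineages. With k lineages, the next merger occurs after an
    Exponential(k*(k-1)/2) time in units of 2N generations."""
    rng = random.Random(seed)
    t = 0.0
    events = []  # (time in generations, lineages remaining after merger)
    for k in range(n, 1, -1):
        rate = k * (k - 1) / 2.0
        t += rng.expovariate(rate) * 2 * pop_size
        events.append((t, k - 1))
    return events

events = coalescent_times(10)
# 10 lineages coalesce through exactly 9 pairwise mergers.
```

Because early mergers happen at high rates and the final pair coalesces at rate 1, most of the tree depth is typically contributed by the last few lineages, a property any coalescent-based inference of rearrangement ages has to account for.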