31 results for DNA Error Correction
Abstract:
This study evaluates how the advection of precipitation, or wind drift, between the radar volume and the ground affects radar measurements of precipitation. Normally precipitation is assumed to fall vertically to the ground from the contributing volume, so that the radar measurement represents the geographical location immediately below. In this study radar measurements are corrected using hydrometeor trajectories calculated from measured and forecast winds, and the effect of the trajectory correction on the radar measurements is evaluated. Wind drift statistics for Finland are compiled using sounding data from two weather stations spanning two years. For each sounding, the hydrometeor phase at ground level is estimated and the drift distance is calculated for different originating level heights. In this way the drift statistics are constructed as a function of range from the radar and elevation angle. On average, a wind drift of 1 km was exceeded at approximately 60 km distance, while a drift of 10 km was exceeded at 100 km distance. Trajectories were calculated using model winds in order to produce a trajectory-corrected ground field from radar PPI images. It was found that on the upwind side of the radar the effective measuring area was reduced, as some trajectories exited the radar volume scan. On the downwind side, areas near the edge of the radar measuring area experienced improved precipitation detection. The effect of trajectory correction is most prominent in instantaneous measurements and diminishes when accumulating over longer time periods. Furthermore, measurements of intense and small-scale precipitation patterns benefit most from wind drift correction. The contribution of wind drift to the uncertainty of the estimated Ze(S) relationship was studied by simulating the effect of different error sources on the uncertainty in the relationship coefficients a and b. The overall uncertainty was assumed to consist of systematic errors of both the radar and the gauge, as well as errors caused by turbulence at the gauge orifice and by wind drift of precipitation. The focus of the analysis is the error associated with wind drift, which was determined by describing the spatial structure of the reflectivity field using the spatial autocovariance (or variogram). This spatial structure was then used together with the calculated drift distances to estimate the variance in the radar measurement produced by precipitation drift, relative to the other error sources. It was found that the error due to wind drift was of similar magnitude to the error due to turbulence at the gauge orifice at all ranges from the radar, while the systematic errors of the instruments were a minor contribution. The correction method presented in the study could be used in radar nowcasting products to improve the estimation of visibility and local precipitation intensities. The method, however, only considers pure snow, and for operational purposes some improvements are desirable, such as melting-layer detection, VPR correction and accounting for the solid hydrometeor type, which would improve the estimation of the vertical velocities of the hydrometeors.
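The drift statistics above rest on a straightforward trajectory integration: the horizontal displacement of a hydrometeor equals the horizontal wind in each layer multiplied by the time the particle spends falling through that layer. A minimal sketch of this idea follows; the sounding values, the single 1 m/s snow fall speed and the layer structure are illustrative assumptions, not the study's data or velocity model.

```python
# Minimal sketch (simplified layer-by-layer integration, assumed fall speed) of the
# trajectory idea: a hydrometeor detected at a given height drifts horizontally with
# the wind in each layer it falls through, so the ground location is offset from the
# point directly below the radar measurement volume.
import numpy as np

# Hypothetical sounding: layer tops (m) and horizontal wind components u, v (m/s).
layer_top = np.array([500, 1000, 1500, 2000, 2500], dtype=float)
u = np.array([4.0, 6.0, 8.0, 10.0, 12.0])
v = np.array([1.0, 1.0, 2.0, 2.0, 3.0])
fall_speed = 1.0   # m/s, typical for snow (rain would be closer to 5 m/s)

def drift_from_height(origin_height):
    """Horizontal displacement (m) accumulated while falling from origin_height to ground."""
    dx = dy = 0.0
    bottom = 0.0
    for top, ui, vi in zip(layer_top, u, v):
        if origin_height <= bottom:
            break
        thickness = min(top, origin_height) - bottom
        t = thickness / fall_speed            # time spent in this layer
        dx += ui * t
        dy += vi * t
        bottom = top
    return np.hypot(dx, dy)

for h in (1000, 2000):
    print(f"origin {h} m -> drift {drift_from_height(h) / 1000:.1f} km")
```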
Abstract:
This thesis consists of two parts. In the first part we performed single-molecule force-extension measurements with 10 kb long DNA molecules from phage λ to validate the calibration and single-molecule capability of our optical tweezers instrument. Fitting the worm-like chain interpolation formula to the data revealed that ca. 71% of the DNA tethers featured a contour length within ±15% of the expected value (3.38 µm). Only 25% of the identified DNA tethers had a persistence length between 30 and 60 nm, whereas the accepted value lies between 40 and 60 nm. In the second part we designed and built a precise temperature controller to remove the thermal fluctuations that cause drift of the optical trap. The controller uses feed-forward and PID (proportional-integral-derivative) feedback to achieve 1.58 mK precision and 0.3 K absolute accuracy. During a 5 min test run it reduced the drift of the trap from 1.4 nm/min in open loop to 0.6 nm/min in closed loop.
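The contour and persistence lengths quoted above are obtained by fitting the worm-like chain interpolation formula (Marko-Siggia) to force-extension data. Below is a minimal sketch of such a fit on synthetic data; the noise level, starting guesses and data range are assumptions for illustration, not the thesis's measurement or analysis code.

```python
# Minimal sketch (synthetic data): fitting the Marko-Siggia worm-like chain
# interpolation formula to a force-extension curve to recover the contour
# length Lc and persistence length Lp of a DNA tether.
import numpy as np
from scipy.optimize import curve_fit

kBT = 4.11  # thermal energy at ~25 C, in pN*nm

def wlc_force(x_um, Lc_um, Lp_nm):
    """Force (pN) at extension x (um) for a chain of contour length Lc and persistence length Lp."""
    rel = np.clip(x_um / Lc_um, 0.0, 0.999)     # avoid the divergence at x = Lc
    return (kBT / Lp_nm) * (0.25 / (1.0 - rel) ** 2 - 0.25 + rel)

# Extension (um) and force (pN) would come from the tweezers; here a toy data set.
rng = np.random.default_rng(0)
x = np.linspace(0.5, 3.2, 50)
F = wlc_force(x, 3.38, 45.0) + rng.normal(0, 0.2, x.size)

popt, _ = curve_fit(wlc_force, x, F, p0=[3.5, 50.0])
print(f"contour length ~ {popt[0]:.2f} um, persistence length ~ {popt[1]:.0f} nm")
```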
Abstract:
Digital elevation models (DEMs) have been an important topic in geography and the surveying sciences for decades due to their geomorphological importance as the reference surface for gravitation-driven material flow, as well as their wide range of uses and applications. When a DEM is used in terrain analysis, for example in automatic drainage basin delineation, the errors of the model propagate into the analysis results. Investigation of this phenomenon is known as error propagation analysis, which has a direct influence on the decision-making process based on interpretations and applications of terrain analysis. Additionally, it may have an indirect influence on data acquisition and DEM generation. The focus of the thesis was on fine toposcale DEMs, which are typically represented on a 5-50 m grid and used at application scales of 1:10 000-1:50 000. The thesis presents a three-step framework for investigating error propagation in DEM-based terrain analysis. The framework includes methods for visualising the morphological gross errors of DEMs, exploring the statistical and spatial characteristics of the DEM error, performing analytical and simulation-based error propagation analysis, and interpreting the error propagation analysis results. The DEM error model was built using geostatistical methods. The results show that appropriate and exhaustive reporting of the various aspects of fine toposcale DEM error is a complex task. This is due to the high number of outliers in the error distribution and to morphological gross errors, which are detectable with the presented visualisation methods. In addition, the use of a global characterisation of DEM error is a gross generalisation of reality, because the areas in which the assumption of stationarity is not violated are small in extent. This was shown using an exhaustive high-quality reference DEM based on airborne laser scanning and local semivariogram analysis. The error propagation analysis revealed that, as expected, an increase in the DEM vertical error increases the error in the surface derivatives. However, contrary to expectations, the spatial autocorrelation of the model appears to have varying effects on the error propagation analysis depending on the application. The use of a spatially uncorrelated DEM error model has been considered a 'worst-case scenario', but this view is now challenged because none of the DEM derivatives investigated in the study had maximum variation with spatially uncorrelated random error. A significant performance improvement was achieved in simulation-based error propagation analysis by applying process convolution to generate realisations of the DEM error model. In addition, a typology of uncertainty in drainage basin delineation is presented.
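The simulation-based part of the framework amounts to drawing spatially autocorrelated error realisations, adding them to the DEM, and recomputing the derivative of interest. The sketch below illustrates this with process convolution (smoothing white noise with a Gaussian kernel); the grid spacing, error magnitude, correlation range and the use of slope as the derivative are illustrative assumptions, not the thesis's parameter choices.

```python
# Minimal sketch of simulation-based error propagation: spatially autocorrelated
# DEM error realisations are generated by process convolution (smoothing white
# noise), propagated through a slope derivative, and the spread of the derivative
# is summarised per cell.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
dem = rng.normal(100.0, 5.0, (200, 200))          # stand-in for a real DEM (m)
cell = 2.0                                        # grid spacing (m), assumed
sigma_z = 0.5                                     # DEM vertical error std (m), assumed
corr_cells = 5                                    # error correlation range in cells, assumed

def slope_deg(z, cell):
    gy, gx = np.gradient(z, cell)
    return np.degrees(np.arctan(np.hypot(gx, gy)))

slopes = []
for _ in range(100):                              # Monte Carlo realisations
    noise = gaussian_filter(rng.normal(0.0, 1.0, dem.shape), corr_cells)
    noise *= sigma_z / noise.std()                # rescale to the target error magnitude
    slopes.append(slope_deg(dem + noise, cell))

slope_sd = np.std(slopes, axis=0)                 # per-cell uncertainty of slope
print("mean slope uncertainty: %.2f degrees" % slope_sd.mean())
```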
Abstract:
Microarrays are high-throughput biological assays that allow the screening of thousands of genes for their expression. The main idea behind microarrays is to compute for each gene a unique signal that is directly proportional to the quantity of mRNA that was hybridized on the chip. The large number of steps, and the errors associated with each step, make the generated expression signal noisy. As a result, microarray data need to be carefully pre-processed before their analysis can be assumed to lead to reliable and biologically relevant conclusions. This thesis focuses on developing methods for improving the gene signal and further utilizing this improved signal for higher-level analysis. To achieve this, first, approaches for designing microarray experiments using various optimality criteria, considering both biological and technical replicates, are described. A carefully designed experiment leads to a signal with low noise, as the effect of unwanted variation is minimized and the precision of the estimates of the parameters of interest is maximized. Second, a system for improving the gene signal by using three scans at varying scanner sensitivities is developed. A novel Bayesian latent intensity model is then applied to these three sets of expression values, corresponding to the three scans, to estimate a suitably calibrated true signal for each gene. Third, a novel image segmentation approach that separates the fluorescent signal from the undesired noise is developed using an additional dye, SYBR Green RNA II. This technique helped identify signal arising only from the hybridized DNA, while signal corresponding to dust, scratches, dye spills and other noise is excluded. Fourth, an integrated statistical model is developed in which signal correction, systematic array effects, dye effects and differential expression are modelled jointly, as opposed to a sequential application of several methods of analysis. The methods described here have been tested only for cDNA microarrays, but can also, with some modifications, be applied to other high-throughput technologies. Keywords: high-throughput technology, microarray, cDNA, multiple scans, Bayesian hierarchical models, image analysis, experimental design, MCMC, WinBUGS.
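The multi-scan idea can be illustrated with a deliberately simplified calibration that relates two scans on unsaturated spots and uses the fitted gain to recover spots that saturate at high sensitivity. This is not the thesis's Bayesian latent intensity model, only a hedged sketch of why several scanner sensitivities carry complementary information.

```python
# Simplified multi-scan sketch (not the thesis's Bayesian latent intensity model):
# relate a low-sensitivity scan to a high-sensitivity scan on unsaturated spots and
# use the fitted gain to predict values that saturate in the high scan.
import numpy as np

SATURATION = 65535                                    # 16-bit scanner ceiling
rng = np.random.default_rng(1)
true = rng.lognormal(8, 1.5, 5000)                    # latent spot intensities
low = np.clip(true * 1.0 + rng.normal(0, 50, true.size), 0, SATURATION)
high = np.clip(true * 4.0 + rng.normal(0, 50, true.size), 0, SATURATION)

ok = high < 0.95 * SATURATION                         # spots unsaturated in the high scan
gain, offset = np.polyfit(low[ok], high[ok], 1)       # linear scan-to-scan calibration

# For saturated spots, predict the "would-be" high-scan value from the low scan.
calibrated = np.where(ok, high, gain * low + offset)
print(f"estimated gain between scans: {gain:.2f}")
```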
Abstract:
This thesis addresses the modeling of financial time series, especially stock market returns and daily price ranges. Modeling data of this kind can be approached with so-called multiplicative error models (MEM). These models nest several well-known time series models such as the GARCH, ACD and CARR models. They are able to capture many well-established features of financial time series, including volatility clustering and leptokurtosis. In contrast to these phenomena, different kinds of asymmetries have received relatively little attention in the existing literature. In this thesis asymmetries arise from various sources. They are observed in both conditional and unconditional distributions, for variables with non-negative values and for variables that take values on the real line. In the multivariate context asymmetries can be observed in the marginal distributions as well as in the relationships between the variables modeled. New methods for all these cases are proposed. Chapter 2 considers GARCH models and the modeling of the returns of two stock market indices. The chapter introduces the so-called generalized hyperbolic (GH) GARCH model to account for asymmetries in both the conditional and the unconditional distribution. In particular, two special cases of the GARCH-GH model which describe the data most accurately are proposed. They are found to improve the fit of the model when compared to symmetric GARCH models. The advantages of accounting for asymmetries are also observed through Value-at-Risk applications. Both theoretical and empirical contributions are provided in Chapter 3 of the thesis. In this chapter the so-called mixture conditional autoregressive range (MCARR) model is introduced, examined and applied to daily price ranges of the Hang Seng Index. The conditions for the strict and weak stationarity of the model, as well as an expression for the autocorrelation function, are obtained by writing the MCARR model as a first-order autoregressive process with random coefficients. The chapter also introduces the inverse gamma (IG) distribution to CARR models. The advantages of the CARR-IG and MCARR-IG specifications over conventional CARR models are demonstrated in the empirical application both in- and out-of-sample. Chapter 4 discusses the simultaneous modeling of absolute returns and daily price ranges. In this part of the thesis a vector multiplicative error model (VMEM) with an asymmetric Gumbel copula is found to provide substantial benefits over the existing VMEM models based on elliptical copulas. The proposed specification is able to capture the highly asymmetric dependence of the modeled variables, thereby improving the performance of the model considerably. The economic significance of the results is established by examining the information content of the derived volatility forecasts.
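The common core of the MEM/CARR family discussed above is a non-negative variable written as a conditional mean times a positive unit-mean innovation, with a GARCH-like recursion for the mean. A minimal simulation sketch follows; the parameter values and the exponential innovation are illustrative assumptions rather than any specification estimated in the thesis.

```python
# Minimal sketch of a multiplicative error model (MEM / CARR(1,1)): a non-negative
# variable x_t = mu_t * eps_t with a GARCH-like recursion for the conditional mean
# mu_t and a unit-mean positive innovation eps_t. Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(42)
omega, alpha, beta = 0.05, 0.10, 0.85     # illustrative parameters, alpha + beta < 1
T = 1000
x = np.empty(T)
mu = np.empty(T)
mu[0] = omega / (1 - alpha - beta)        # unconditional mean as the starting value
x[0] = mu[0] * rng.exponential(1.0)       # unit-mean positive innovation

for t in range(1, T):
    mu[t] = omega + alpha * x[t - 1] + beta * mu[t - 1]
    x[t] = mu[t] * rng.exponential(1.0)   # e.g. a daily range or absolute return

print(f"sample mean {x.mean():.3f} vs implied unconditional mean {mu[0]:.3f}")
```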
Abstract:
This thesis presents methods for locating and analyzing cis-regulatory DNA elements involved in the regulation of gene expression in multicellular organisms. The regulation of gene expression is carried out by the combined effort of several transcription factor proteins collectively binding the DNA at the cis-regulatory elements. Only sparse knowledge of the 'genetic code' of these elements exists today. An automatic tool for the discovery of putative cis-regulatory elements could help their experimental analysis, which would result in a more detailed view of cis-regulatory element structure and function. We have developed a computational model for the evolutionary conservation of cis-regulatory elements. The elements are modeled as evolutionarily conserved clusters of sequence-specific transcription factor binding sites. We give an efficient dynamic programming algorithm that locates the putative cis-regulatory elements and scores them according to the conservation model. A notable proportion of the high-scoring DNA sequences show transcriptional enhancer activity in transgenic mouse embryos. The conservation model includes four parameters whose optimal values are estimated with simulated annealing. With good parameter values the model discriminates well between DNA sequences with evolutionarily conserved cis-regulatory elements and DNA sequences that have evolved neutrally. On further inquiry, the set of highest-scoring putative cis-regulatory elements was found to be sensitive to small variations in the parameter values. The statistical significance of the putative cis-regulatory elements is estimated with the Two-Component Extreme Value Distribution. The p-values grade the conservation of the cis-regulatory elements above the neutral expectation. The parameter values for the distribution are estimated by simulating neutral DNA evolution. The conservation of the transcription factor binding sites can be used in upstream analysis of regulatory interactions. This approach may provide mechanistic insight into transcription-level data from, e.g., microarray experiments. Here we give a method to predict shared transcriptional regulators for a set of co-expressed genes. The EEL (Enhancer Element Locator) software implements the method for locating putative cis-regulatory elements. The software facilitates both interactive use and distributed batch processing. We have used it to analyze the non-coding regions around all human genes with respect to the orthologous regions in various other species, including mouse. The data from these genome-wide analyses are stored in a relational database which is used in the publicly available web services for upstream analysis and visualization of the putative cis-regulatory elements in the human genome.
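The basic primitive underlying binding-site models of this kind is scoring a sequence window against a position weight matrix (PWM). The sketch below shows only that primitive, not the EEL conservation model or its dynamic programming algorithm; the count matrix, sequence and reporting threshold are toy assumptions.

```python
# Minimal illustration (not the EEL algorithm itself) of the basic primitive it builds
# on: scanning a DNA sequence with a position weight matrix (PWM) and reporting the
# log-odds scores of candidate transcription factor binding sites.
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

# Toy count matrix for a hypothetical factor (rows A, C, G, T; columns = positions).
# Real matrices would come from databases such as JASPAR or TRANSFAC.
counts = np.array([
    [8, 1, 10, 0],   # A
    [1, 0,  0, 9],   # C
    [0, 9,  0, 1],   # G
    [1, 0,  0, 0],   # T
], dtype=float)

background = 0.25
pwm = np.log2((counts + 0.5) / (counts.sum(axis=0) + 2.0) / background)  # log-odds

def scan(seq, pwm):
    L = pwm.shape[1]
    for i in range(len(seq) - L + 1):
        window = seq[i:i + L]
        score = sum(pwm[BASES[b], j] for j, b in enumerate(window))
        yield i, window, score

for pos, site, score in scan("TTAGACGTAGACTT", pwm):
    if score > 2.0:                        # arbitrary reporting threshold
        print(pos, site, f"{score:.2f}")
```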
Abstract:
The object of this study is the tailless, internal membrane-containing bacteriophage PRD1. It has a dsDNA genome with covalently bound terminal proteins required for replication. The uniqueness of its structure makes this phage a desirable object of research. PRD1 has been studied for some 30 years, during which time much information has accumulated on its structure and life cycle. The two least characterised steps of the PRD1 life cycle, genome packaging and virus release, are investigated here. PRD1 shares the main principles of virion assembly (DNA packaging in particular) and host cell lysis with other dsDNA bacteriophages. However, this phage has some fascinating individual peculiarities, such as DNA packaging into a membrane vesicle inside the capsid, the absence of an apparent portal protein, a holin inhibitor, and procapsid expansion. In the course of this study we identified the components of the DNA packaging vertex of the capsid and determined the function of protein P6 in packaging. We managed to purify procapsids for an in vitro packaging system, optimise the reaction and significantly increase its efficiency. We developed a new method to determine DNA translocation and were able to quantify the efficiency and the rate of packaging. A model for PRD1 DNA packaging was also proposed. Another part of this study covers the lysis of the host cell. Like other dsDNA bacteriophages, PRD1 has been proposed to utilise a two-component lysis system. The existence of this lysis system in PRD1 has been proven by experiments using recombinant proteins, and the multi-step nature of the lysis process has been established.
Abstract:
Extraintestinal pathogenic Escherichia coli (ExPEC) represent a diverse group of E. coli strains which infect extraintestinal sites, such as the urinary tract, the bloodstream, the meninges, the peritoneal cavity, and the lungs. Urinary tract infections (UTIs) caused by uropathogenic E. coli (UPEC), the major subgroup of ExPEC, are among the most prevalent microbial diseases worldwide and a substantial burden for public health care systems. UTIs are responsible for serious morbidity and mortality in the elderly, in young children, and in immune-compromised and hospitalized patients. ExPEC strains are different, from both genetic and clinical perspectives, from commensal E. coli strains belonging to the normal intestinal flora and from intestinal pathogenic E. coli strains causing diarrhea. ExPEC strains are characterized by a broad range of alternate virulence factors, such as adhesins, toxins, and iron accumulation systems. Unlike diarrheagenic E. coli, whose distinctive virulence determinants evoke characteristic diarrheagenic symptoms and signs, ExPEC strains are exceedingly heterogeneous and are known to possess no specific virulence factor, or set of factors, that is obligatory for the infection of a certain extraintestinal site (e.g. the urinary tract). The ExPEC genomes are highly diverse mosaic structures in permanent flux. These strains have obtained a significant amount of DNA (estimated at up to 25% of the genome) through the acquisition of foreign DNA from diverse related or non-related donor species by lateral transfer of mobile genetic elements, including pathogenicity islands (PAIs), plasmids, phages, transposons, and insertion elements. The ability of ExPEC strains to cause disease is mainly derived from this horizontally acquired gene pool; the extraneous DNA facilitates rapid adaptation of the pathogen to changing conditions and hence extends the spectrum of sites that can be infected. However, neither the amount of unique DNA in different ExPEC strains (or UPEC strains) nor the mechanisms lying behind the observed genomic mobility are known. Due to this extreme heterogeneity of the UPEC and ExPEC populations in general, the routine surveillance of ExPEC is exceedingly difficult. In this project, we presented a novel virulence gene algorithm (VGA) for the estimation of the extraintestinal virulence potential (VP, pathogenicity risk) of clinically relevant ExPECs and fecal E. coli isolates. The VGA was based on a DNA microarray specific for the ExPEC phenotype (ExPEC pathoarray). This array contained 77 DNA probes homologous with known (e.g. adhesion factors, iron accumulation systems, and toxins) and putative (e.g. genes predicted to be involved in adhesion, iron uptake, or metabolic functions) ExPEC virulence determinants. In total, 25 of the DNA probes homologous with known virulence factors and 36 of the DNA probes representing putative extraintestinal virulence determinants were found at significantly higher frequency in virulent ExPEC isolates than in commensal E. coli strains. We showed that the ExPEC pathoarray and the VGA could be readily used to differentiate highly virulent ExPECs both from less virulent ExPEC clones and from commensal E. coli strains. Applying the VGA to a group of unknown ExPECs (n=53) and fecal E. coli isolates (n=37), 83% of the strains were correctly identified as extraintestinally virulent or commensal E. coli. Conversely, 15% of clinical ExPECs and 19% of fecal E. coli strains failed to fall into their respective pathogenic and non-pathogenic groups. Clinical data and virulence gene profiles of these strains supported the estimated VPs; UPEC strains with atypically low risk ratios were largely isolated from patients with a particular medical history, including diabetes mellitus or catheterization, or from elderly patients. In addition, fecal E. coli strains with VPs characteristic of ExPEC were shown to represent the diagnostically important fraction of resident strains of the gut flora with a high potential of causing extraintestinal infections. Interestingly, a large fraction of the DNA probes associated with the ExPEC phenotype corresponded to novel DNA sequences without any known function in UTIs and thus represented new genetic markers for extraintestinal virulence. These DNA probes included unknown DNA sequences originating from the genomic subtractions of four clinical ExPEC isolates as well as from five novel cosmid sequences identified in the UPEC strains HE300 and JS299. The characterized cosmid sequences (pJS332, pJS448, pJS666, pJS700, and pJS706) revealed complex modular DNA structures with known and unknown DNA fragments arranged in a puzzle-like manner and integrated into the common E. coli genomic backbone. Furthermore, cosmid pJS332 of the UPEC strain HE300, which carried a chromosomal virulence gene cluster (iroBCDEN) encoding the salmochelin siderophore system, was shown to be part of a transmissible plasmid of Salmonella enterica. Taken together, the results of this project support the conclusions that, first, (i) homologous recombination, even within coding genes, contributes to the observed mosaicism of ExPEC genomes and, secondly, (ii) besides en bloc transfer of large DNA regions (e.g. chromosomal PAIs), rearrangements of small DNA modules also provide a means of genomic plasticity. The data presented in this project supplemented previous whole genome sequencing projects of E. coli and indicated that each E. coli genome displays a unique assemblage of individual mosaic structures, which enables these strains to successfully colonize and infect different anatomical sites.
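The classification step of a virulence gene algorithm can be pictured as scoring an isolate by the virulence-associated probes it hybridises to and comparing the score to a cutoff. The sketch below is purely hypothetical: the probe list, weights and threshold are placeholders and do not reproduce the published VGA.

```python
# Hypothetical sketch of the idea behind a virulence gene algorithm (VGA): score an
# isolate by the fraction of virulence-associated probes it hybridises to and call it
# "ExPEC-like" above a threshold. Probes, weights and the cutoff are placeholders.
from typing import Dict

VIRULENCE_PROBES = {"papC": 1.0, "sfaS": 1.0, "hlyA": 1.5, "iroN": 1.0, "fyuA": 1.0}

def virulence_potential(hybridised: Dict[str, bool]) -> float:
    """Weighted fraction of virulence-associated probes that gave a positive signal."""
    total = sum(VIRULENCE_PROBES.values())
    hits = sum(w for probe, w in VIRULENCE_PROBES.items() if hybridised.get(probe))
    return hits / total

isolate = {"papC": True, "hlyA": True, "fyuA": False, "iroN": True}
vp = virulence_potential(isolate)
print(f"virulence potential {vp:.2f} ->", "ExPEC-like" if vp > 0.5 else "commensal-like")
```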
Abstract:
Mutation and recombination are the fundamental processes leading to genetic variation in natural populations. This variation forms the raw material for evolution through natural selection and drift. Therefore, studying mutation rates may reveal information about the evolutionary histories as well as the phylogenetic interrelationships of organisms. In this thesis two molecular tools, DNA barcoding and the molecular clock, were examined. In the first part, the efficiency of mutations in delineating closely related species was tested and the implications for conservation practices were assessed. The second part investigated the proposition that a constant mutation rate exists within invertebrates, in the form of a metabolic-rate dependent molecular clock, which can be applied to accurately date speciation events. DNA barcoding aspires to be an efficient technique not only to distinguish between species but also to reveal population-level variation, relying solely on mutations found on a short stretch of a single gene. In this thesis barcoding was applied to discriminate between Hylochares populations from Russian Karelia and new Hylochares findings from the greater Helsinki region in Finland. Although barcoding failed to delineate the two reproductively isolated groups, their distinct morphological features and differing life-history traits led to their classification as two closely related, though separate, species. The lack of genetic differentiation appears to be due to a recent divergence event not yet reflected in the beetles' molecular make-up. Thus, the Russian Hylochares was described as a new species. The Finnish species, previously considered locally extinct, was recognized as endangered. Even if, due to their identical genetic make-up, the populations had been regarded as conspecific, conservation strategies based on prior knowledge from Russia would not have guaranteed the survival of the Finnish beetle. Therefore, new conservation actions based on detailed studies of the biology and life history of the Finnish Hylochares were conducted to protect this endemic rarity in Finland. The idea behind the strict molecular clock is that mutation rates are constant over evolutionary time and may thus be used to infer species divergence dates. However, one of the most recent theories argues that a strict clock does not tick per unit of time but rather has a constant substitution rate per unit of mass-specific metabolic energy. Therefore, according to this hypothesis, molecular clocks have to be recalibrated taking body size and temperature into account. This thesis tested the temperature effect on mutation rates in equally sized invertebrates. For the first dataset (family Eucnemidae, Coleoptera) the phylogenetic interrelationships and evolutionary history of the genus Arrhipis had to be inferred before the influence of temperature on substitution rates could be studied. Further, a second, larger invertebrate dataset (family Syrphidae, Diptera) was employed. Several methodological approaches, a number of genes and multiple molecular clock models revealed that there was no consistent relationship between temperature and mutation rate for the taxa under study. Thus, the body size effect, observed in vertebrates but controversial for invertebrates, rather than temperature, may be the underlying driving force behind the metabolic-rate dependent molecular clock. Therefore, the metabolic-rate dependent molecular clock does not hold for the invertebrate groups studied here.
This thesis emphasizes that molecular techniques relying on mutation rates have to be applied with caution. Whereas they may work satisfactorily under certain conditions for specific taxa, they may fail for others. The molecular clock as well as DNA barcoding should incorporate all the information and data available to obtain comprehensive estimates of the existing biodiversity and its evolutionary history.
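For context, the metabolic-rate dependent clock hypothesis tested above is commonly written as a substitution rate proportional to the mass-specific metabolic rate, which scales with body mass and temperature roughly as M^(-1/4) exp(-E/kT). The sketch below only illustrates that scaling; the activation energy, body mass and temperatures are assumed values, not the thesis's data.

```python
# Illustrative sketch of the metabolic-rate dependent clock hypothesis: the
# mass-specific metabolic rate (and, by the hypothesis, the substitution rate)
# scales as M^(-1/4) * exp(-E/kT). Values below are assumptions for illustration.
import math

E = 0.65          # activation energy, eV (commonly assumed value)
k = 8.617e-5      # Boltzmann constant, eV/K

def relative_rate(mass_g: float, temp_c: float) -> float:
    """Relative substitution rate under the metabolic-rate dependent clock."""
    T = temp_c + 273.15
    return mass_g ** -0.25 * math.exp(-E / (k * T))

# Two equally sized invertebrates at different ambient temperatures:
r_cold = relative_rate(0.1, 10.0)
r_warm = relative_rate(0.1, 25.0)
print(f"predicted rate ratio (25 C vs 10 C): {r_warm / r_cold:.2f}")
```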
Abstract:
This thesis focuses on mutations of the POLG1 gene, encoding the catalytic subunit polγ-α of the mitochondrial DNA polymerase gamma holoenzyme (polG), and on the association of these mutations with different clinical phenotypes. In addition, particular defective mutant variants of the protein were characterized biochemically in vitro. The polG holoenzyme is the sole DNA polymerase found in mitochondria. It is involved in the replication and repair of the mitochondrial genome, mtDNA. The holoenzyme also includes the accessory subunit polγ-β, which is required for the enhanced processivity of polγ-α. Defective polγ-α causes accumulation of secondary mutations in mtDNA, which leads to a defective oxidative phosphorylation system. The clinical consequences of such mutations are variable, affecting the nervous system, skeletal muscles, liver and other post-mitotic tissues. The aims of the studies included: 1) determination of the role of POLG1 mutations in neurological syndromes with features of mitochondrial dysfunction and an unknown molecular cause; 2) development and set-up of diagnostic tests for routine clinical purposes; 3) biochemical characterization of the functional consequences of the identified polγ-α variants. The studies describe new neurological phenotypes, in addition to PEO, caused by POLG1 mutations, including parkinsonism, premature amenorrhea, ataxia and Parkinson's disease (PD). POLG1 mutations and polymorphisms are common and represent potential genetic risk factors, at least in the Finnish population. The major findings and applications reported here are: 1) POLG1 mutations cause parkinsonism and premature menopause in PEO families in either a recessive or a dominant manner. 2) Common recessive POLG1 mutations (A467T and W748S) in the homozygous state cause severe adult- or juvenile-onset ataxia without muscular symptoms or histological or mtDNA abnormalities in muscle. 3) A common recessive pathogenic change, A467T, can also cause a mild dominant disease in heterozygous carriers. 4) The A467T variant shows reduced polymerase activity due to defective template binding. 5) Rare polyglutamine tract length variants of POLG1 are significantly enriched in Finnish idiopathic Parkinson's disease patients. 6) Dominant mutations are clearly restricted to the highly conserved polymerase domain motifs, whereas recessive ones are more evenly distributed along the protein. The present results highlight and confirm the new role of mitochondria in parkinsonism/Parkinson's disease and describe a new mitochondrial ataxia. Based on these results, a POLG1 diagnostic routine has been set up at Helsinki University Central Hospital (HUSLAB).
Abstract:
Visual acuities at the time of referral and on the day before surgery were compared in 124 patients operated on for cataract in Vaasa Central Hospital, Finland. Preoperative visual acuity and the occurrence of ocular and general disease were compared in samples of consecutive cataract extractions performed in 1982, 1985, 1990, 1995 and 2000 in two hospitals in the Vaasa region in Finland. The repeatability and the standard deviation of random measurement error in visual acuity and refractive error determination in a clinical environment in cataractous, pseudophakic and healthy eyes were estimated by re-examining the visual acuity and refractive error of patients referred to cataract surgery or consultation by ophthalmic professionals. Altogether 99 eyes of 99 persons (41 cataractous, 36 pseudophakic and 22 healthy eyes) with a visual acuity range of Snellen 0.3 to 1.3 (0.52 to -0.11 logMAR) were examined. During an average waiting time of 13 months, visual acuity in the study eye deteriorated from 0.68 logMAR to 0.96 logMAR (from 0.2 to 0.1 in Snellen decimal values). The average decrease in vision was 0.27 logMAR per year. In the fastest quartile the visual acuity change per year was 0.75 logMAR, and in the second fastest 0.29 logMAR; the third and fourth quartiles were virtually unaffected. From 1982 to 2000, the incidence of cataract surgery increased from 1.0 to 7.2 operations per 1000 inhabitants per year in the Vaasa region. The average preoperative visual acuity in the operated eye improved by 0.85 logMAR (in decimal values from 0.03 to 0.2) and in the better eye by 0.27 logMAR (in decimal values from 0.23 to 0.43) over this period. The proportion of patients profoundly visually handicapped (VA in the better eye <0.1) before the operation fell from 15% to 4%, and that of patients less profoundly visually handicapped (VA in the better eye 0.1 to <0.3) from 47% to 15%. The repeatability of visual acuity measurement, estimated as a coefficient of repeatability for all 99 eyes, was ±0.18 logMAR, and the standard deviation of measurement error was 0.06 logMAR. Eyes with the lowest visual acuity (0.3-0.45) had the largest variability, with coefficient of repeatability values of ±0.24 logMAR, and eyes with a visual acuity of 0.7 or better had the smallest, ±0.12 logMAR. The repeatability of refractive error measurement was studied in the same patient material as the repeatability of visual acuity. Differences between measurements 1 and 2 were calculated as three-dimensional vector values and spherical equivalents and expressed as coefficients of repeatability. Coefficients of repeatability for all eyes for the vertical, torsional and horizontal vectors were ±0.74 D, ±0.34 D and ±0.93 D, respectively, and for the spherical equivalent for all eyes ±0.74 D. Eyes with lower visual acuity (0.3-0.45) had larger variability in vector and spherical equivalent values (±1.14 D), but the difference between visual acuity groups was not statistically significant. The difference in the mean defocus equivalent between measurements 1 and 2 was, however, significantly greater in the lower visual acuity group. If a change of ±0.5 D (measured in defocus equivalents) is accepted as a basis for a change of spectacles for eyes with good vision, the corresponding basis for eyes in the visual acuity range of 0.3-0.65 would be ±1 D. Differences in repeated visual acuity measurements are partly explained by errors in refractive error measurements.
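The repeatability figures above follow the usual test-retest logic: measure each eye twice and summarise the spread of the differences. A minimal sketch with the common Bland-Altman style definitions is given below; the simulated acuities and error magnitude are assumptions, and the estimator may differ in detail from the one used in the thesis.

```python
# Minimal sketch (assumed Bland-Altman style definitions, synthetic data): coefficient
# of repeatability and standard deviation of measurement error from paired test-retest
# visual acuity values in logMAR.
import numpy as np

rng = np.random.default_rng(7)
true_va = rng.uniform(0.0, 0.5, 99)                       # latent acuity (logMAR)
visit1 = true_va + rng.normal(0, 0.06, true_va.size)      # first measurement
visit2 = true_va + rng.normal(0, 0.06, true_va.size)      # repeated measurement

diff = visit1 - visit2
coeff_repeatability = 1.96 * diff.std(ddof=1)             # ~95% limit for |difference|
sd_measurement_error = diff.std(ddof=1) / np.sqrt(2)      # per-measurement error SD

print(f"coefficient of repeatability: ±{coeff_repeatability:.2f} logMAR")
print(f"SD of measurement error:      {sd_measurement_error:.3f} logMAR")
```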
Abstract:
Accurate and stable time series of geodetic parameters can be used to help in understanding the dynamic Earth and its response to global change. The Global Positioning System, GPS, has proven to be invaluable in modern geodynamic studies. In Fennoscandia the first GPS networks were set up in 1993. These networks form the basis of the national reference frames in the area, but they also provide long and important time series for crustal deformation studies. These time series can be used, for example, to better constrain the ice history of the last ice age and the Earth's structure via existing glacial isostatic adjustment models. To improve the accuracy and stability of the GPS time series, the possible nuisance parameters and error sources need to be minimized. We have analysed GPS time series to study two phenomena. First, we study the refraction of the GPS signal in the neutral atmosphere, and, second, we study the surface loading of the crust by environmental factors, namely the non-tidal loading of the Baltic Sea, atmospheric loading and varying continental water reservoirs. We studied the atmospheric effects on the GPS time series by comparing the standard method to slant delays derived from a regional numerical weather model. We have presented a method for correcting the atmospheric delays at the observational level. The results show that both standard atmosphere modelling and atmospheric delays derived from a numerical weather model by ray-tracing provide a stable solution. The advantage of the latter is that the number of unknowns used in the computation decreases and thus the computation may become faster and more robust. The computation can also be done with any processing software that allows the atmospheric correction to be turned off. The crustal deformation due to loading was computed by convolving Green's functions with surface load data, that is to say, global hydrology models, global numerical weather models and a local model for the Baltic Sea. The result was that the loading effects can be seen in the GPS coordinate time series. Reducing the computed deformation from the vertical time series of GPS coordinates reduces the scatter of the time series; however, the long-term trends are not influenced. We show that global hydrology models and the local sea surface can explain up to 30% of the variation in the GPS time series. On the other hand, the atmospheric loading admittance in the GPS time series is low, and the different hydrological surface load models could not be validated in the present study. In order to be used for GPS corrections in the future, both atmospheric loading and hydrological models need further analysis and improvements.
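The loading analysis reduces a modelled deformation series from the GPS coordinate series and asks how much of the scatter it explains. A minimal sketch on synthetic data follows; the annual loading amplitude, noise level and trend are assumptions, not results from the thesis.

```python
# Minimal sketch (synthetic data, not the thesis's processing chain): subtract a
# modelled loading deformation from a vertical GPS coordinate time series and
# quantify how much of the detrended scatter it explains.
import numpy as np

rng = np.random.default_rng(3)
days = np.arange(3 * 365)
loading_model = 4.0 * np.sin(2 * np.pi * days / 365.25)                 # mm, annual loading signal
gps_up = 0.002 * days + loading_model + rng.normal(0, 3.0, days.size)   # trend + loading + noise

detrended = gps_up - np.polyval(np.polyfit(days, gps_up, 1), days)
corrected = detrended - (loading_model - loading_model.mean())

explained = 1.0 - corrected.var() / detrended.var()
print(f"loading model explains {100 * explained:.0f}% of the detrended variance")
```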
Abstract:
A better understanding of the limiting step in a first-order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even to cosmology. This is due to the fact that in most phase transitions the new phase is separated from the mother phase by a free energy barrier. This barrier is crossed in a process called nucleation. Nowadays it is considered that a significant fraction of all atmospheric particles is produced by vapour-to-liquid nucleation. In atmospheric sciences, as well as in other scientific fields, the theoretical treatment of nucleation is mostly based on a theory known as the Classical Nucleation Theory. However, the Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapour-to-liquid nucleation takes place at given conditions. This thesis studies unary homogeneous vapour-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapour and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapour is accurately described by the liquid drop model applied by the Classical Nucleation Theory once the clusters are larger than some threshold size. The threshold cluster sizes contain only a few or some tens of molecules, depending on the interaction potential and temperature. However, the error made in modelling the smallest clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law. By calculating correction factors to the Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates that are in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low-order virial coefficients at modest temperatures and vapour densities, or in other words, in the validity range of the non-interacting cluster theory of Frenkel, Band and Bijl. Our results do not indicate a need for a size-dependent replacement free energy correction. The results also indicate that the Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for the calculation of the equilibrium vapour density, the size dependence of the surface tension and the planar surface tension directly from cluster simulations. We also show that the size dependence of the cluster surface tension at the equimolar surface is a function of the virial coefficients, a result confirmed by our cluster simulations.
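For reference, the liquid drop model quantities that the Monte Carlo barriers are compared against can be written down directly from Classical Nucleation Theory. The sketch below evaluates the work of formation, critical cluster size and barrier height for water-like parameter values at 300 K; these numbers are illustrative assumptions, not the thesis's simulation settings.

```python
# Minimal sketch of the Classical Nucleation Theory (liquid drop model) quantities:
# the work of forming an n-molecule cluster, the critical cluster size and the barrier
# height. Water-like parameter values at 300 K are illustrative assumptions.
import numpy as np

kB = 1.380649e-23        # J/K
T = 300.0                # K
sigma = 0.072            # planar surface tension, N/m (water-like)
v = 3.0e-29              # molecular volume in the liquid, m^3
S = 5.0                  # supersaturation ratio

ln_s = np.log(S)
dG_star = 16 * np.pi * sigma**3 * v**2 / (3 * (kB * T * ln_s) ** 2)   # barrier height, J
n_star = 32 * np.pi * sigma**3 * v**2 / (3 * (kB * T * ln_s) ** 3)    # critical cluster size

def work_of_formation(n):
    """CNT work of forming an n-molecule cluster: bulk term + surface term."""
    area_coeff = (36 * np.pi) ** (1 / 3) * v ** (2 / 3)   # surface area = area_coeff * n^(2/3)
    return -n * kB * T * ln_s + sigma * area_coeff * n ** (2 / 3)

print(f"critical cluster size ~ {n_star:.0f} molecules")
print(f"barrier height ~ {dG_star / (kB * T):.1f} kT "
      f"(check: {work_of_formation(n_star) / (kB * T):.1f} kT)")
```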