35 results for vector error correction


Relevance:

20.00%

Publisher:

Abstract:

This thesis consists of three articles on passive vector fields in turbulence. The vector fields interact with a turbulent velocity field, which is described by the Kraichnan model. The effect of the Kraichnan model on the passive vectors is studied via an equation for the pair correlation function and its solutions. The first paper is concerned with the passive magnetohydrodynamic equations. Emphasis is placed on the so-called "dynamo effect", which in the present context is understood as unbounded growth of the pair correlation function. The exact analytical conditions for such growth are found in the cases of zero and infinite Prandtl numbers. The second paper contains an extensive study of a number of passive vector models. Emphasis is now on the properties of the (assumed) steady state, namely anomalous scaling, anisotropy, and small- and large-scale behavior with different types of forcing or stirring. The third paper is in many ways a completion of the previous one in its study of the steady state existence problem. Conditions for the existence of the steady state are found in terms of the spatial roughness parameter of the turbulent velocity field.
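For orientation, a minimal sketch of the velocity statistics the thesis builds on, in the standard Kraichnan-model notation (the precise formulation may differ between the three papers): the velocity field is Gaussian and white in time,

\[
\langle v_i(\mathbf{x},t)\, v_j(\mathbf{x}',t') \rangle = \delta(t-t')\, D_{ij}(\mathbf{x}-\mathbf{x}'),
\qquad
d_{ij}(\mathbf{r}) \equiv D_{ij}(\mathbf{0}) - D_{ij}(\mathbf{r}) \sim r^{\xi},
\quad 0 < \xi < 2,
\]

where ξ is the spatial roughness parameter referred to in the third paper. The whiteness in time is what yields a closed linear equation for the pair correlation function of the passive vector, and this is what makes the model analytically tractable.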

Relevance:

20.00%

Publisher:

Abstract:

Accurate and stable time series of geodetic parameters can be used to help in understanding the dynamic Earth and its response to global change. The Global Positioning System, GPS, has proven to be invaluable in modern geodynamic studies. In Fennoscandia the first GPS networks were set up in 1993. These networks form the basis of the national reference frames in the area, but they also provide long and important time series for crustal deformation studies. These time series can be used, for example, to better constrain the ice history of the last ice age and the Earth's structure via existing glacial isostatic adjustment models. To improve the accuracy and stability of the GPS time series, the possible nuisance parameters and error sources need to be minimized. We have analysed GPS time series to study two phenomena: first, the refraction of the GPS signal in the neutral atmosphere, and, second, the loading of the crust by environmental factors, namely the non-tidal Baltic Sea, the atmospheric load and varying continental water reservoirs. We studied the atmospheric effects on the GPS time series by comparing the standard method to slant delays derived from a regional numerical weather model, and we present a method for correcting the atmospheric delays at the observational level. The results show that both standard atmosphere modelling and atmospheric delays derived from a numerical weather model by ray-tracing provide a stable solution. The advantage of the latter is that the number of unknowns used in the computation decreases, so the computation may become faster and more robust; it can also be done with any processing software that allows the atmospheric correction to be turned off. The crustal deformation due to loading was computed by convolving Green's functions with surface load data, that is, global hydrology models, global numerical weather models and a local model for the Baltic Sea. The loading effects can indeed be seen in the GPS coordinate time series. Removing the computed deformation from the vertical GPS coordinate time series reduces their scatter; the long-term trends, however, are not affected. We show that global hydrology models and the local sea surface can explain up to 30% of the variation in the GPS time series. On the other hand, the admittance of atmospheric loading in the GPS time series is low, and the different hydrological surface load models could not be validated in the present study. In order to be used for GPS corrections in the future, both the atmospheric loading and the hydrological models need further analysis and improvement.
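As an illustration of the reduction step described in the abstract (a minimal sketch, not the authors' actual processing chain; the synthetic data and function names are hypothetical), one can subtract a modelled loading deformation series from the GPS vertical coordinate time series and quantify the explained variance:

```python
import numpy as np

def variance_reduction(gps_up, model_up):
    """Fraction of GPS vertical-coordinate variance explained by a
    modelled surface-loading deformation series (same units, same epochs)."""
    residual = gps_up - model_up
    return 1.0 - np.var(residual) / np.var(gps_up)

# Hypothetical example: a synthetic annual loading signal plus noise.
t = np.arange(0, 10, 1 / 365.25)                    # 10 years, daily epochs
loading = 4.0 * np.sin(2 * np.pi * t)               # modelled deformation, mm
gps = loading + np.random.normal(0.0, 5.0, t.size)  # observed = signal + noise

print(f"explained variance: {variance_reduction(gps, loading):.1%}")
```

With these illustrative amplitudes the modelled signal explains roughly a quarter of the variance, of the same order as the up-to-30% figure reported for the hydrology and sea-surface models.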

Relevance:

20.00%

Publisher:

Abstract:

A better understanding of the rate-limiting step in a first-order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even cosmology. This is because in most phase transitions the new phase is separated from the mother phase by a free energy barrier, which is crossed in a process called nucleation. It is now thought that a significant fraction of all atmospheric particles is produced by vapour-to-liquid nucleation. In atmospheric sciences, as in other scientific fields, the theoretical treatment of nucleation is mostly based on the Classical Nucleation Theory, which, however, is known to have only limited success in predicting the rate at which vapour-to-liquid nucleation takes place under given conditions. This thesis studies unary homogeneous vapour-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapour and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapour is accurately described by the liquid drop model used by the Classical Nucleation Theory once the clusters are larger than some threshold size; the threshold clusters contain only a few or some tens of molecules, depending on the interaction potential and temperature. However, the error made in modelling the smallest clusters as liquid drops leads to an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law. By calculating correction factors to the Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates that are in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low-order virial coefficients at modest temperatures and vapour densities, in other words, in the validity range of the non-interacting cluster theory of Frenkel, Band and Bijl. Our results do not indicate a need for a size-dependent replacement free energy correction, and they also indicate that the Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for calculating the equilibrium vapour density, the size dependence of the surface tension and the planar surface tension directly from cluster simulations, and we show that the size dependence of the cluster surface tension at the equimolar surface is a function of virial coefficients, a result confirmed by our cluster simulations.
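For context, the liquid drop model invoked above can be summarized with the standard Classical Nucleation Theory expressions (textbook formulas, not taken from the thesis itself): the work of forming an n-molecule cluster from a vapour at supersaturation S is

\[
W(n) = -n\,k_{\mathrm{B}}T \ln S + \sigma A_{1}\, n^{2/3},
\qquad
n^{*} = \left( \frac{2\sigma A_{1}}{3\,k_{\mathrm{B}}T \ln S} \right)^{3},
\qquad
J \propto \exp\!\left( -\frac{W(n^{*})}{k_{\mathrm{B}}T} \right),
\]

where σ is the planar surface tension and A₁ a surface-area-per-molecule constant. The critical size n* maximizes W, and the barrier height W(n*) controls the nucleation rate J; the correction factors mentioned in the abstract adjust this barrier height.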

Relevance:

20.00%

Publisher:

Abstract:

Part I: Parkinson's disease is a slowly progressive neurodegenerative disorder in which, in particular, the dopaminergic neurons of the substantia nigra pars compacta degenerate and die. Current conventional treatment alleviates the symptoms but has no effect on the progression of the disease. Gene therapy research has focused on the possibility of restoring the lost brain function by at least two means: substituting the critical enzymes needed for the synthesis of dopamine, and slowing down the progression of the disease by supporting the functions of the remaining nigral dopaminergic neurons with neurotrophic factors. The striatal levels of enzymes such as tyrosine hydroxylase, dopa decarboxylase and GTP-CH1 decrease as the disease progresses. By replacing one or all of these enzymes, dopamine levels in the striatum may be restored to normal and the behavioural impairments caused by the disease may be ameliorated, especially in the later stages of the disease. The neurotrophic factors glial cell line-derived neurotrophic factor (GDNF) and neurturin have been shown to protect and restore the functions of dopaminergic cell somas and terminals, as well as to improve behaviour, in animal lesion models. This therapy may be best suited to the early stages of the disease, when there are more dopaminergic neurons for the neurotrophic factors to reach. Viral vector-mediated gene transfer provides a tool for delivering proteins with complex structures into specific brain locations and provides long-term protein over-expression.
Part II: The aim of our study was to investigate the effects of two orally dosed COMT inhibitors, entacapone (10 and 30 mg/kg) and tolcapone (10 and 30 mg/kg), followed by administration of the peripheral dopa decarboxylase inhibitor carbidopa (30 mg/kg) and L-dopa (30 mg/kg), on dopamine and its metabolite levels in the dorsal striatum and nucleus accumbens of freely moving rats, using dual-probe in vivo microdialysis. Earlier, similarly designed studies have only been conducted in the dorsal striatum. We also confirmed the results of earlier ex vivo studies on the effects of intraperitoneally dosed tolcapone (30 mg/kg) and entacapone (30 mg/kg) on striatal and hepatic COMT activity. The results obtained from the dorsal striatum were generally in line with earlier studies: tolcapone tended to increase dopamine and DOPAC levels and decrease HVA levels, while entacapone tended to keep striatal dopamine and HVA levels elevated longer than in controls and also tended to elevate DOPAC levels. Surprisingly, in the nucleus accumbens dopamine levels were not elevated after either dose of entacapone or tolcapone. Accumbal DOPAC levels, especially in the tolcapone 30 mg/kg group, were elevated nearly to the same extent as in the dorsal striatum. Entacapone 10 mg/kg elevated accumbal HVA levels more than the 30 mg/kg dose, and the effect was more pronounced in the nucleus accumbens than in the dorsal striatum; this suggests that entacapone 30 mg/kg has minor central effects. Our ex vivo results from the dorsal striatum likewise suggest that entacapone 30 mg/kg has minor and transient central effects, even though central HVA levels were not suppressed below those of the control group in either brain area in the microdialysis study. Both entacapone and tolcapone suppressed hepatic COMT activity more than striatal COMT activity, and tolcapone was more effective than entacapone in the dorsal striatum. The differences between dopamine and its metabolite levels in the dorsal striatum and the nucleus accumbens may be due to the different properties of the two brain areas.

Relevance:

20.00%

Publisher:

Abstract:

We present the first observation in hadronic collisions of the electroweak production of vector boson pairs (VV, V = W, Z) where one boson decays to a dijet final state. The data correspond to 3.5 fb⁻¹ of integrated luminosity of pp̅ collisions at √s = 1.96 TeV collected by the CDF II detector at the Fermilab Tevatron. We observe 1516 ± 239(stat) ± 144(syst) diboson candidate events and measure a cross section σ(pp̅ → VV + X) of 18.0 ± 2.8(stat) ± 2.4(syst) ± 1.1(lumi) pb, in agreement with the expectations of the standard model.
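As a rough consistency check (my own back-of-the-envelope arithmetic, not part of the paper; it ignores backgrounds and per-channel acceptance details), the quoted uncertainties combine in quadrature, and the candidate yield together with the integrated luminosity implies an overall acceptance times efficiency times branching fraction:

```python
import math

sigma = 18.0                       # measured cross section, pb
stat, syst, lumi = 2.8, 2.4, 1.1   # quoted uncertainties, pb
n_events = 1516.0                  # diboson candidate events
int_lumi = 3500.0                  # 3.5 fb^-1 expressed in pb^-1

total_unc = math.sqrt(stat**2 + syst**2 + lumi**2)
acc_eff = n_events / (sigma * int_lumi)   # acceptance x efficiency x BR

print(f"sigma = {sigma:.1f} +/- {total_unc:.1f} pb")
print(f"implied acceptance x efficiency x BR ~ {acc_eff:.3f}")
```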

Relevance:

20.00%

Publisher:

Abstract:

We present the result of a search for a massive color-octet vector particle (e.g., a massive gluon) decaying to a pair of top quarks in proton-antiproton collisions with a center-of-mass energy of 1.96 TeV. The search is based on 1.9 fb$^{-1}$ of data collected using the CDF detector during Run II of the Tevatron at Fermilab. We study $t\bar{t}$ events in the lepton+jets channel with at least one $b$-tagged jet. A massive gluon is characterized by its mass, decay width, and the strength of its coupling to quarks; these parameters are determined from the observed invariant mass distribution of the top quark pairs. We set limits on the massive gluon coupling strength for masses between 400 and 800 GeV$/c^2$ and width-to-mass ratios between 0.05 and 0.50. The coupling strength of the hypothetical massive gluon to quarks is consistent with zero within the explored parameter space.
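For orientation, a heavy resonance characterized by a mass M, a width Γ and a coupling strength λ would modify the $t\bar{t}$ invariant-mass spectrum roughly through a relativistic Breit-Wigner term (a generic parameterization, not necessarily the exact signal model used in the paper):

\[
\frac{d\sigma}{dm_{t\bar{t}}} \;\propto\;
\frac{\lambda^{2}\, m_{t\bar{t}}^{2}}
     {\left( m_{t\bar{t}}^{2} - M^{2} \right)^{2} + M^{2}\Gamma^{2}},
\]

and fitting such a shape, together with its interference with the QCD continuum, to the observed distribution is what yields limits on λ as a function of M and Γ/M.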

Relevance:

20.00%

Publisher:

Abstract:

Although the first procedure on a sighted human eye using an excimer laser was reported in 1988 (McDonald et al. 1989, O'Connor et al. 2006), only three studies (Kymionis et al. 2007, O'Connor et al. 2006, Rajan et al. 2004) with a follow-up of over ten years had been published when this thesis was started. The present thesis aims to investigate 1) the long-term outcomes of excimer laser refractive surgery performed for myopia and/or astigmatism by photorefractive keratectomy (PRK) and laser in situ keratomileusis (LASIK), 2) the possible differences in postoperative outcomes and complications when moderate-to-high astigmatism is treated with PRK or LASIK, 3) the presence of irregular astigmatism that depends exclusively on the corneal epithelium, and 4) the role of corneal nerve recovery in corneal wound healing after PRK enhancement. Our results revealed that, in the long term, the number of eyes that achieved an uncorrected visual acuity (UCVA) of ≤0.0 and ≤0.5 (logMAR) was higher after PRK than after LASIK. Postoperative stability was slightly better after PRK than after LASIK. In LASIK-treated eyes the incidence of myopic regression was more pronounced when the intended correction was over 6.0 D and in patients aged under 30 years; on the other hand, the intended corrections in our study were higher for LASIK than for PRK eyes. No long-term differences were found between PRK and LASIK in the percentages of eyes with best corrected visual acuity (BCVA) or with a loss of two or more lines of visual acuity. The long-term outcomes of PRK with two different delivery systems, a broad-beam and a scanning laser, were compared and revealed no differences. In the treatment of moderate-to-high astigmatism, LASIK yielded better results than PRK in terms of UCVA and less loss of two or more lines of BCVA, with similar stability for both procedures. Vector analysis showed that the LASIK outcomes tended to be more accurate than the PRK outcomes, although no statistically significant differences were found. Irregular astigmatism secondary to recurrent corneal erosion due to map-dot-fingerprint dystrophy was successfully treated with phototherapeutic keratectomy (PTK). Preoperative videokeratographies (VK) showed irregular astigmatism, whereas postoperatively all eyes showed a regular pattern; no correlation was found between the pre- and postoperative VK patterns. The outcomes of late PRK in eyes originally subjected to LASIK showed that all (7/7) eyes achieved a UCVA of ≤0.5 at the last follow-up (range 3 to 11 months), and no eye lost lines of BCVA. Postoperatively, all eyes developed an initial mild haze (grade 0.5 to 1) during the first month; at the last follow-up 5/7 eyes showed a haze of 0.5 and it was no longer evident in 2/7 eyes. Based on these results, we demonstrated that the long-term outcomes after PRK and LASIK were safe and efficient, with similar stability for both procedures. The PRK outcomes were similar whether treated with a broad-beam or a scanning-slit laser. LASIK was better than PRK for correcting moderate-to-high astigmatism, yet both procedures showed a tendency towards undercorrection. Irregular astigmatism was shown to be able to depend exclusively on the corneal epithelium; if this kind of astigmatism is present in the cornea and a customized PRK/LASIK correction is performed on the basis of wavefront measurements, irregular astigmatism may be produced rather than treated. Corneal sensory nerve recovery appears to play an important role in the modulation of corneal wound healing and postoperative anterior stromal scarring. PRK enhancement may be an option in eyes with previous LASIK after a sufficient time interval of at least 2 years.
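The vector analysis mentioned above treats astigmatism as a vector quantity so that intended and achieved corrections can be compared. A minimal sketch using the power-vector (double-angle) representation of Thibos et al. (a generic formulation; the thesis may use a different vector-analysis method, and the example numbers are hypothetical):

```python
import math

def power_vector(sphere, cyl, axis_deg):
    """Convert a sphere/cylinder/axis refraction to power-vector
    components (M, J0, J45) using the double-angle representation."""
    a = math.radians(axis_deg)
    return (sphere + cyl / 2.0,
            -cyl / 2.0 * math.cos(2 * a),
            -cyl / 2.0 * math.sin(2 * a))

def correction_error(intended, achieved):
    """Euclidean distance between intended and achieved corrections in
    power-vector space; zero means the target was hit exactly."""
    return math.dist(power_vector(*intended), power_vector(*achieved))

# Hypothetical example: intended vs. achieved refraction (D, D, degrees).
print(f"error: {correction_error((-4.0, -2.5, 90.0), (-3.75, -2.0, 85.0)):.2f} D")
```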

Relevance:

20.00%

Publisher:

Abstract:

In recent years, thanks to developments in information technology, large-dimensional datasets have become increasingly available. Researchers now have access to thousands of economic series, and the information contained in them can be used to create accurate forecasts and to test economic theories. To exploit this large amount of information, researchers and policymakers need an appropriate econometric model. The usual time series models, such as vector autoregressions, cannot incorporate more than a few variables. There are two ways around this problem: use variable selection procedures, or pool the information contained in the series into an index model. This thesis focuses on one of the most widespread index models, the dynamic factor model (the theory behind this model, based on the previous literature, is the core of the first part of this study), and on its use in forecasting Finnish macroeconomic indicators (the focus of the second part of the thesis). In particular, I forecast economic activity indicators (e.g. GDP) and price indicators (e.g. the consumer price index) from three large Finnish datasets. The first dataset contains a large set of aggregated series obtained from the Statistics Finland database. The second dataset is composed of economic indicators from the Bank of Finland. The last dataset consists of disaggregated data from Statistics Finland, which I call the micro dataset. The forecasts are computed following a two-step procedure: in the first step I estimate a set of common factors from the original dataset; the second step consists of formulating forecasting equations that include the previously extracted factors. The predictions are evaluated using the relative mean squared forecast error, where the benchmark model is a univariate autoregressive model. The results are dataset-dependent: the forecasts based on factor models are very accurate for the first dataset (the Statistics Finland one), while they are considerably worse for the Bank of Finland dataset; the forecasts derived from the micro dataset are still good, but less accurate than those obtained in the first case. This work opens up multiple research directions. The results obtained here can be replicated for longer datasets, the non-aggregated data can be represented in an even more disaggregated form (firm level), and the use of the micro data, one of the major contributions of this thesis, can be useful for the imputation of missing values and the creation of flash estimates of macroeconomic indicators (nowcasting).
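A minimal sketch of the two-step procedure described above (principal-component factor extraction followed by a factor-augmented forecasting regression; the data, the number of factors and the equation specification are illustrative assumptions, not the thesis's exact setup):

```python
import numpy as np

def pca_factors(X, r):
    """Estimate r common factors from a T x N panel by principal
    components: standardize, then project on the leading right singular vectors."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, vt = np.linalg.svd(Xs, full_matrices=False)
    return Xs @ vt[:r].T                     # T x r matrix of factor estimates

def factor_forecast(y, F, h=1):
    """Step two: regress y_{t+h} on a constant, the factors and y_t,
    then forecast from the most recent observation."""
    Z = np.column_stack([np.ones(len(y) - h), F[:-h], y[:-h]])
    beta, *_ = np.linalg.lstsq(Z, y[h:], rcond=None)
    z_last = np.concatenate([[1.0], F[-1], [y[-1]]])
    return z_last @ beta

# Hypothetical data standing in for one of the three panels and a target series.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 120))          # stand-in for the large dataset
y = rng.standard_normal(200)                 # stand-in for, e.g., GDP growth
F = pca_factors(X, r=4)
print("one-step-ahead forecast:", factor_forecast(y, F, h=1))
```

The relative mean squared forecast error used in the evaluation is then simply the ratio of this model's out-of-sample squared errors to those of the univariate autoregressive benchmark.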