17 results for APPROXIMATIONS

in Helda - Digital Repository of University of Helsinki

Relevance: 20.00%

Abstract:

In this thesis I examine one commonly used class of methods for the analytic approximation of cellular automata, the so-called local cluster approximations. This class subsumes the well-known mean-field and pair approximations, as well as higher-order generalizations of these. While a straightforward method known as Bayesian extension exists for constructing cluster approximations of arbitrary order on one-dimensional lattices (and in certain other cases), for higher-dimensional systems the construction of approximations beyond the pair level becomes more complicated due to the presence of loops. In this thesis I describe the one-dimensional construction as well as a number of approximations suggested for higher-dimensional lattices, comparing them against a number of consistency criteria that such approximations could be expected to satisfy. I also outline a general variational principle for constructing consistent cluster approximations of arbitrary order with minimal bias, and show that the one-dimensional construction indeed satisfies this principle. Finally, I apply this variational principle to derive a novel consistent expression for symmetric three-cell cluster frequencies as estimated from pair frequencies, and use this expression to construct a quantitatively improved pair approximation of the well-known lattice contact process on a hexagonal lattice.
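To make the lowest rung of this hierarchy concrete, consider the basic contact process (recovery at unit rate, infection by occupied neighbours). The mean-field (single-cell) approximation closes the dynamics at the level of the occupation frequency ρ; in one common convention, with infection rate λ per occupied neighbour and coordination number z,

    \frac{d\rho}{dt} = -\rho + \lambda z \rho (1 - \rho).

The pair approximation instead evolves pair frequencies and must estimate triple frequencies from them; it is exactly this closure step that the variational principle described above is designed to make consistent. (The convention for λ and z here is my choice for illustration, not necessarily that of the thesis.)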

Relevance: 10.00%

Abstract:

This dissertation is a theoretical study of finite-state-based grammars used in natural language processing. The study is concerned with certain varieties of finite-state intersection grammars (FSIG) whose parsers define regular relations between surface strings and annotated surface strings. The study focuses on three aspects of FSIGs. (i) Computational complexity of grammars under limiting parameters: the computational complexity of practical natural language processing is approached through performance-motivated parameters on structural complexity. Each parameter splits some grammars in the Chomsky hierarchy into an infinite set of subset approximations. When the approximations are regular, they seem to fall into the logarithmic-time hierarchy and the dot-depth hierarchy of star-free regular languages. This theoretical result is important and possibly relevant to grammar induction. (ii) Linguistically applicable structural representations: regarding linguistically applicable representations of syntactic entities, the study contains new bracketing schemes that cope with dependency links, left- and right-branching, crossing dependencies, and spurious ambiguity. New grammar representations that resemble the Chomsky-Schützenberger representation of context-free languages are presented, including, in particular, representations for mildly context-sensitive non-projective dependency grammars whose performance-motivated approximations are linear-time parseable. (iii) Compilation and simplification of linguistic constraints: efficient compilation methods for certain regular operations, such as generalized restriction, are presented. These include an elegant algorithm that has already been adopted in a proprietary finite-state tool. In addition to the compilation methods, an approach to on-the-fly simplification of finite-state representations of parse forests is sketched. These findings are tightly coupled under the theme of locality. I argue that they help us develop better, linguistically oriented formalisms for finite-state parsing and more efficient parsers for natural language processing. Keywords: syntactic parsing, finite-state automata, dependency grammar, first-order logic, linguistic performance, star-free regular approximations, mildly context-sensitive grammars
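As a toy illustration of the intersection idea behind FSIG parsing (the tags, constraints, and helper functions below are invented for exposition and are far simpler than the formalisms of the thesis), a parse can be viewed as an annotated surface string that survives intersection with every regular constraint:

    import re
    from itertools import product

    # Toy finite-state intersection parsing: a parse is an annotated surface
    # string, and the grammar accepts the intersection of the languages of
    # its constraints. Real FSIGs intersect finite automata; this brute-force
    # version merely illustrates the set-intersection semantics.

    TAGS = ["DET", "N", "V"]

    def candidates(words):
        """All taggings of the sentence (the 'ambiguity automaton')."""
        for tags in product(TAGS, repeat=len(words)):
            yield " ".join(f"{w}/{t}" for w, t in zip(words, tags))

    # Each constraint is a regular expression over the annotated string.
    CONSTRAINTS = [
        re.compile(r"^\S+/DET\b"),            # sentence starts with a determiner
        re.compile(r"/V\b"),                  # at least one verb somewhere
        re.compile(r"^(?!.*/DET \S+/V\b)"),   # no verb directly after a determiner
    ]

    def parse(words):
        return [c for c in candidates(words)
                if all(r.search(c) for r in CONSTRAINTS)]

    print(parse(["the", "dog", "barks"]))   # -> ['the/DET dog/N barks/V']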

Relevance: 10.00%

Abstract:

The molecular-level structure of mixtures of water and alcohols is very complicated and has been under intense research in the recent past. Both experimental and computational methods have been used in these studies. One method for studying the intra- and intermolecular binding in the mixtures is the use of so-called difference Compton profiles, which are a way to obtain information about changes in the electron wave functions. In Compton scattering, a photon scatters inelastically off an electron. The Compton profile obtained from the electron wave functions is directly proportional to the probability of a photon scattering with a given energy into a given solid angle. In this work we develop a method to compute Compton profiles numerically for mixtures of liquids. In order to obtain the electronic wave functions needed to calculate the Compton profiles, we need statistical information about the atomic coordinates. Acquiring this using ab initio molecular dynamics is beyond our computational capabilities, and we therefore use classical molecular dynamics to model the movement of atoms in the mixture. We discuss the validity of the chosen method in view of the results obtained from the simulations. There are some difficulties in using classical molecular dynamics for the quantum mechanical calculations, but these can possibly be overcome by parameter tuning. According to the calculations, clear differences can be seen in the Compton profiles of different mixtures. This prediction needs to be tested in experiments in order to find out whether the approximations made are valid.
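For reference, within the impulse approximation commonly used in Compton-scattering analysis (that this is the approximation employed here is my assumption), the isotropic Compton profile is the projection of the electron momentum density ρ(p),

    J(p_z) = \iint \rho(\mathbf{p})\, dp_x\, dp_y,

and a difference profile for a mixture is typically formed as ΔJ = J_mix - Σ_i x_i J_i, with x_i the mole fractions of the pure components, so that intra- and intermolecular binding effects show up directly as deviations from zero.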

Relevance: 10.00%

Abstract:

One of the unanswered questions of modern cosmology is the issue of baryogenesis. Why does the universe contain a huge amount of baryons but no antibaryons? What kind of mechanism can produce such an asymmetry? One theory proposed to explain this problem is leptogenesis, in which right-handed neutrinos with heavy Majorana masses are added to the Standard Model. This addition introduces explicit lepton number violation into the theory. Instead of producing the baryon asymmetry directly, these heavy neutrinos decay in the early universe. If these decays are CP-violating, they produce a net lepton number, which is then partially converted into baryon number by the electroweak sphaleron process. In this work we start by reviewing the current observational data on the amount of baryons in the universe. We also introduce the Sakharov conditions, the necessary criteria for any theory of baryogenesis. We review the current data on neutrino oscillations and explain why they require the existence of neutrino mass. We introduce the different kinds of mass terms that can be written for neutrinos, and explain how the see-saw mechanism naturally explains the observed neutrino mass scales, motivating the addition of the Majorana mass term. After introducing leptogenesis qualitatively, we derive the Boltzmann equations governing leptogenesis and give analytical approximations for them. Finally, we review the numerical solutions of these equations, demonstrating the capability of leptogenesis to explain the observed baryon asymmetry. In the appendix, simple Feynman rules are given for theories with interactions involving both Dirac and Majorana fermions, and these are applied at tree level to calculate the parameters relevant for the theory.
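For orientation, a widely used form of these Boltzmann equations (following the notation of Buchmüller, Di Bari, and Plümacher; that the thesis uses exactly this form is my assumption) is, in terms of z = M_1/T,

    \frac{dN_{N_1}}{dz} = -(D + S)\,(N_{N_1} - N_{N_1}^{\mathrm{eq}}),
    \qquad
    \frac{dN_{B-L}}{dz} = -\varepsilon_1 D\,(N_{N_1} - N_{N_1}^{\mathrm{eq}}) - W N_{B-L},

where D, S, and W encode decays, scatterings, and washout, and ε_1 is the CP asymmetry in the decays of the lightest heavy neutrino N_1; the sphaleron process then converts a known fraction of the final B-L asymmetry into baryon number.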

Relevance: 10.00%

Abstract:

It is well known that an integrable (in the sense of Arnold-Jost) Hamiltonian system gives rise to quasi-periodic motion, with trajectories running on invariant tori. These tori foliate the whole phase space. If we perturb an integrable system, the Kolmogorov-Arnold-Moser (KAM) theorem states that, provided a non-degeneracy condition holds and the perturbation is sufficiently small, most of the invariant tori carrying quasi-periodic motion persist, getting only slightly deformed; the measure of the set of persisting invariant tori grows as the size of the perturbation shrinks. In the first part of the thesis we use a Renormalization Group (RG) scheme to prove the classical KAM result in the case of a non-analytic perturbation (the latter is only assumed to have continuous derivatives up to a sufficiently large order). We proceed by solving a sequence of problems in which the perturbations are analytic approximations of the original one, and finally show that the approximate solutions converge to a differentiable solution of the original problem. In the second part we use an RG scheme with continuous scales, so that instead of solving an iterative equation as in the classical RG approach to KAM, we end up solving a partial differential equation. This allows us to reduce the complications of treating a sequence of iterative equations to an application of the Banach fixed point theorem in a suitable Banach space.
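In standard notation (a generic statement of the setting, not the precise hypotheses of the thesis), one perturbs an integrable Hamiltonian

    H(I, \varphi) = h(I) + \varepsilon f(I, \varphi),

and, under the non-degeneracy condition det(∂²h/∂I²) ≠ 0, the tori whose frequency vectors ω = ∂h/∂I satisfy a Diophantine condition

    |\omega \cdot k| \ge \gamma |k|^{-\tau} \quad \text{for all } k \in \mathbb{Z}^n \setminus \{0\}

persist for ε sufficiently small, and the measure of the destroyed tori vanishes as ε → 0.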

Relevance: 10.00%

Abstract:

This thesis studies homogeneous classes of complete metric spaces. Over the past few decades, model theory has been extended to cover a variety of nonelementary frameworks. Shelah introduced the abstract elementary classes (AEC) in the 1980s as a common framework for the study of nonelementary classes. Another direction of extension has been the development of model theory for metric structures. This thesis takes a step towards combining these two by introducing an AEC-like setting for studying metric structures. To find a balance between generality and the possibility of developing stability-theoretic tools, we work in a homogeneous context, thus extending the usual compact approach. The homogeneous context enables the application of stability-theoretic tools developed in discrete homogeneous model theory. Using these, we prove categoricity transfer theorems for homogeneous metric structures with respect to isometric isomorphisms. We also show how generalized isomorphisms can be added to the class, giving a model-theoretic approach to, e.g., Banach space isomorphisms or operator approximations. The novelty is the built-in treatment of these generalized isomorphisms, making, e.g., stability up to perturbation the natural stability notion. With respect to these generalized isomorphisms we develop a notion of independence. It behaves well already for structures which are omega-stable up to perturbation, and it coincides with the one from classical homogeneous model theory over sufficiently saturated models. We also introduce a notion of isolation and prove dominance for it.

Relevance: 10.00%

Abstract:

Frictions are factors that hinder the trading of securities in financial markets. Typical frictions include limited market depth, transaction costs, lack of infinite divisibility of securities, and taxes. Conventional models used in mathematical finance often gloss over these issues, which affect almost all financial markets, by arguing that the impact of frictions is negligible and that, consequently, frictionless models are valid approximations. This dissertation consists of three research papers related to the study of the validity of such approximations in two distinct modeling problems. Models of price dynamics based on diffusion processes, i.e., continuous strong Markov processes, are widely used in the frictionless scenario. The first paper establishes that diffusion models can indeed be understood as approximations of price dynamics in markets with frictions. This is achieved by introducing an agent-based model of a financial market in which finitely many agents trade a financial security, the price of which evolves according to the price impacts generated by trades. It is shown that if the number of agents is large, then under certain assumptions the price process of the security, which is a pure-jump process, can be approximated by a one-dimensional diffusion process. In a slightly extended model, in which agents may exhibit herd behavior, the approximating diffusion model turns out to be a stochastic volatility model. Finally, it is shown that when the agents' tendency to herd is strong, logarithmic returns in the approximating stochastic volatility model are heavy-tailed. The remaining papers are related to no-arbitrage criteria and superhedging in continuous-time option pricing models under small-transaction-cost asymptotics. Guasoni, Rásonyi, and Schachermayer have recently shown that, in such a setting, a financial security admits no arbitrage opportunities, and no feasible superhedging strategies exist for European call and put options written on it, as long as its price process is continuous and has the so-called conditional full support (CFS) property. Motivated by this result, the two papers establish CFS for certain stochastic integrals and for a subclass of Brownian semistationary processes. As a consequence, a wide range of possibly non-Markovian local and stochastic volatility models have the CFS property.
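For the reader's convenience, the CFS property can be stated as follows (my paraphrase of the Guasoni-Rásonyi-Schachermayer condition, so the exact formulation should be checked against the original): a positive continuous adapted process (S_t), t ∈ [0,T], has conditional full support if, for every t ∈ [0,T),

    \mathrm{supp}\,\mathrm{Law}\bigl((S_u)_{u \in [t,T]} \,\big|\, \mathcal{F}_t\bigr) = C_{S_t}^{+}([t,T]) \quad \text{a.s.},

where C_x^{+}([t,T]) denotes the positive continuous functions on [t,T] starting at x. Informally: given any market history up to time t, every continuation of the price path remains possible.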

Relevance: 10.00%

Abstract:

Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and the resolution of the models. Present-day NWP systems operate with horizontal resolutions in the range from about 40 km to 10 km; recently, the aim has been to reach scales of 1-4 km operationally. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution mesoscale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, as model resolution improves further, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For clear-sky longwave radiation, the parameterization schemes used in NWP models provide much better results than simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive at producing fairly accurate surface fluxes. Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent to both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. For longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional mesoscale model experiments suggest that the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is caused by an inertial oscillation mechanism when the large-scale flow is from the south-east or west. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment with a 7.7 km grid size is able to generate a similar LLJ flow structure as suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is very important, especially if the inner mesoscale model domain is small.
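The inertial-oscillation mechanism behind the LLJ can be sketched in a few lines of code (a deliberately minimal toy with invented parameter values, not the model runs of the thesis): once frictional stress is switched off, as when air flows from land out over the cool gulf, the ageostrophic wind vector rotates at the Coriolis frequency and the wind speed periodically overshoots its geostrophic value.

    from math import hypot

    # Toy Blackadar-type inertial oscillation. With friction absent, the
    # ageostrophic wind (u - ug, v - vg) rotates at the inertial frequency f,
    # so the total wind speed oscillates around, and overshoots, the
    # geostrophic speed: a low-level jet.

    f = 1.2e-4            # Coriolis parameter at ~60N [1/s]
    ug, vg = 8.0, 0.0     # geostrophic wind [m/s]
    u, v = 5.0, -2.0      # initial frictional (sub-geostrophic) wind

    dt, hours = 60.0, 24
    speeds = []
    for _ in range(int(hours * 3600 / dt)):
        u, v = u + f * (v - vg) * dt, v - f * (u - ug) * dt
        speeds.append(hypot(u, v))

    print(f"geostrophic speed: {hypot(ug, vg):.1f} m/s")
    print(f"max speed in {hours} h: {max(speeds):.1f} m/s (supergeostrophic)")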

Relevance: 10.00%

Abstract:

Nucleation is the first step of the process by which gas molecules in the atmosphere condense to form liquid or solid particles. Despite the importance of atmospheric new-particle formation for both climate and health-related issues, little information exists on its precise molecular-level mechanisms. In this thesis, potential nucleation mechanisms involving sulfuric acid together with either water and ammonia or reactive biogenic molecules are studied using quantum chemical methods. Quantum chemistry calculations are based on the numerical solution of Schrödinger's equation for a system of atoms and electrons, subject to various sets of approximations whose precise details give rise to a large number of model chemistries. A comparison of several different model chemistries indicates that the computational method must be chosen with care if accurate results for sulfuric acid-water-ammonia clusters are desired. Specifically, binding energies are incorrectly predicted by some popular density functionals, and vibrational anharmonicity must be accounted for if quantitatively reliable formation free energies are desired. The calculations reported in this thesis show that a combination of different high-level energy corrections and advanced thermochemical analysis can quantitatively replicate experimental results concerning the hydration of sulfuric acid. The role of ammonia in sulfuric acid-water nucleation was revealed by a series of calculations on molecular clusters of increasing size with respect to all three coordinates: sulfuric acid, water and ammonia. As indicated by experimental measurements, ammonia significantly assists the growth of clusters in the sulfuric acid coordinate. The calculations presented in this thesis predict that in atmospheric conditions, this effect becomes important as the number of acid molecules increases from two to three. On the other hand, small molecular clusters are unlikely to contain more than one ammonia molecule per sulfuric acid. This implies that the average NH3:H2SO4 mole ratio of small molecular clusters in atmospheric conditions is likely to be between 1:3 and 1:1. Calculations on charged clusters confirm the experimental result that the HSO4- ion is much more strongly hydrated than neutral sulfuric acid. Preliminary calculations on HSO4-·NH3 clusters indicate that ammonia is likely to play at most a minor role in ion-induced nucleation in the sulfuric acid-water system. Calculations of thermodynamic and kinetic parameters for the reaction of stabilized Criegee Intermediates with sulfuric acid demonstrate that quantum chemistry is a powerful tool for investigating chemically complicated nucleation mechanisms. The calculations indicate that if the biogenic Criegee Intermediates have sufficiently long lifetimes in atmospheric conditions, the studied reaction may be an important source of nucleation precursors.
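As an example of how such formation free energies enter observable quantities (a standard law-of-mass-action relation, assumed here rather than quoted from the thesis), the equilibrium hydrate distribution of sulfuric acid (A) follows from

    \frac{[A\cdot(H_2O)_n]}{[A\cdot(H_2O)_{n-1}]} = \frac{p_{H_2O}}{p^\circ}\, \exp\!\left(-\frac{\Delta G_n^\circ}{RT}\right),

where ΔG°_n is the standard free energy of attaching the n-th water molecule; this is precisely the kind of quantity for which the choice of density functional and the treatment of anharmonicity matter.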

Relevance: 10.00%

Abstract:

We present a measurement of the transverse momentum with respect to the jet axis (kT) of particles in jets produced in pp̅ collisions at √s = 1.96 TeV. Results are obtained for charged particles within a cone of opening angle 0.5 radians around the jet axis, in events with dijet invariant masses between 66 and 737 GeV/c². The experimental data are compared to theoretical predictions obtained for fragmentation partons within the framework of resummed perturbative QCD, using the modified leading log and next-to-modified leading log approximations. The comparison shows that the trends in the data are successfully described by the theoretical predictions, indicating that the perturbative QCD stage of jet fragmentation is dominant in shaping basic jet characteristics.
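(For clarity: the observable is conventionally defined as k_T = p \sin\theta, where p is the magnitude of the particle momentum and θ its angle to the jet axis; this is my paraphrase of the standard definition rather than a quotation from the paper.)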

Relevance: 10.00%

Abstract:

Thin films are the basis of much of recent technological advance, ranging from coatings with mechanical or optical benefits to platforms for nanoscale electronics. In the latter, semiconductors have been the norm ever since silicon became the main construction material for a multitude of electronic components. The range of characteristics of silicon-based systems can be widened by manipulating the structure of the thin films at the nanoscale, for instance by making them porous. The characteristics of different films can then, to some extent, be combined by simple superposition. Thin films can be manufactured using many different methods. One emerging field is cluster beam deposition, in which aggregates of hundreds or thousands of atoms are deposited one by one to form a layer whose characteristics depend on the deposition parameters. One critical parameter is the deposition energy, which dictates how porous, if at all, the layer becomes. Other parameters, such as the sputtering rate and aggregation conditions, affect the size and consistency of the individual clusters. Understanding nanoscale processes, which cannot be observed experimentally, is fundamental to optimizing experimental techniques and opening new possibilities at this scale. Atomistic computer simulations offer a window to the world of nanometres and nanoseconds in a way unparalleled by the most accurate of microscopes, and transmission electron microscope image simulations bridge the gap between simulation and experiment by providing a tangible link between the two. In this thesis, the entire process of cluster beam deposition is explored using molecular dynamics and image simulations. The process begins with the formation of the clusters, which is investigated for Si/Ge in an Ar atmosphere. The structure of the clusters is optimized to bring it as close to the experimental ideal as possible. Then, clusters are deposited, one by one, onto a substrate until a sufficiently thick layer has been produced. Finally, the concept is expanded by further deposition with different parameters, resulting in multiple superimposed layers of different porosities. This work demonstrates that the aggregation of clusters is not entirely understood within the scope of the approximations used in the simulations; yet it also shows how continued deposition of clusters with varying deposition energy can lead to a novel kind of nanostructured thin film: a multielemental porous multilayer. According to theory, these new structures have characteristics that can be tailored for a variety of applications with a precision heretofore unseen in conventional multilayer manufacture.
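The qualitative link between deposition energy and porosity can be caricatured in a few lines (a crude lattice toy with invented parameters, in no way the molecular dynamics of the thesis): low-energy clusters stick where they first touch and trap voids, while energetic clusters relax into hollows and densify the film.

    import random

    # Toy model of film growth: with probability p_relax a cluster settles
    # into the lowest nearby column (high deposition energy -> compact film);
    # otherwise it sticks at first contact, ballistic-deposition style,
    # leaving voids (low energy -> porous film).

    def grow_film(width=200, n_clusters=20000, p_relax=0.0, seed=1):
        rng = random.Random(seed)
        heights = [0] * width
        for _ in range(n_clusters):
            i = rng.randrange(width)
            left, right = heights[(i - 1) % width], heights[(i + 1) % width]
            if rng.random() < p_relax:
                j = min([(heights[i], i),
                         (left, (i - 1) % width),
                         (right, (i + 1) % width)])[1]
                heights[j] += 1               # settle into the lowest column
            else:
                heights[i] = max(heights[i] + 1, left, right)  # stick on contact
        return n_clusters / sum(heights)      # packing fraction (1 = dense)

    for p in (0.0, 0.5, 1.0):
        print(f"relaxation prob {p:.1f} -> packing fraction {grow_film(p_relax=p):.2f}")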

Relevance: 10.00%

Abstract:

This study evaluates three different time units in option pricing: trading time, calendar time, and continuous time using discrete approximations (CTDA). The CTDA model partitions the trading day into 30-minute intervals, where each interval is given a weight corresponding to the historical volatility in that interval. Furthermore, the non-trading volatility, both overnight and weekend volatility, is included in the first interval of the trading day in the CTDA model. The three models are tested against market prices. The results indicate that the trading-time model gives the best fit to market prices, in line with the results of previous studies but contrary to expectations under no-arbitrage option pricing. Under no-arbitrage pricing, the option premium should reflect the cost of hedging the expected volatility during the option's remaining life. The study concludes that the historical patterns in volatility are not fully accounted for by the market; rather, the market prices options closer to trading time.
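A minimal sketch of pricing on such a volatility-weighted clock (the interval weights and parameter values below are invented placeholders, not the estimates of the study):

    from math import exp, log, sqrt, erf

    # Remaining option life measured in variance-equivalent trading days:
    # the day is split into 13 half-hour intervals whose weights sum to the
    # variance share of a full day plus, in the first interval, the
    # non-trading (overnight/weekend) variance.
    WEIGHTS = [0.18, 0.10, 0.08, 0.07, 0.06, 0.06, 0.06,
               0.06, 0.06, 0.07, 0.08, 0.10, 0.12]

    def remaining_time(days_left, intervals_left_today):
        today = sum(WEIGHTS[-intervals_left_today:]) if intervals_left_today else 0.0
        return today + days_left * sum(WEIGHTS)

    def bs_call(S, K, sigma_daily, tau_days, r=0.0):
        """Black-Scholes call with time measured on the CTDA-style clock."""
        if tau_days <= 0:
            return max(S - K, 0.0)
        vol = sigma_daily * sqrt(tau_days)
        d1 = (log(S / K) + r * tau_days + 0.5 * vol ** 2) / vol
        d2 = d1 - vol
        N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
        return S * N(d1) - K * exp(-r * tau_days) * N(d2)

    tau = remaining_time(days_left=5, intervals_left_today=7)
    print(f"effective time to expiry: {tau:.2f} variance-days")
    print(f"call price: {bs_call(100.0, 100.0, sigma_daily=0.01, tau_days=tau):.2f}")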

Relevance: 10.00%

Abstract:

This study examines the intraday and weekend volatility of the German DAX. The intraday volatility is partitioned into smaller intervals and compared to a whole day's volatility. The estimated intraday variance is U-shaped, and the weekend variance is estimated at 19% of that of a normal trading day. The patterns in the intraday and weekend volatility are used to develop an extension to the Black and Scholes formula with a new time basis. Calendar or trading days are commonly used for measuring time in option pricing. The Continuous Time using Discrete Approximations (CTDA) model developed in this study uses a measure of time with smaller intervals, approaching continuous time. The model accounts for the lapse of time during trading only. Arbitrage pricing suggests that the option price equals the expected cost of hedging volatility during the option's remaining life. In this model, time is allowed to lapse as volatility occurs, on an intraday basis. The measure of time in CTDA is thus modified to correct for non-constant volatility and to account for these intraday and weekend patterns.

Relevance: 10.00%

Abstract:

The Lucianic text of the Septuagint of the Historical Books, witnessed primarily by the manuscript group L (19, 82, 93, 108, and 127), consists of at least two strata: the recensional elements, which date back to about 300 C.E., and the substratum under these recensional elements, the proto-Lucianic text. Some distinctive readings in L seem to be supported by witnesses that antedate the supposed time of the recension. These witnesses include the biblical quotations of Josephus, Hippolytus, Irenaeus, Tertullian, and Cyprian, and the Old Latin translation of the Septuagint. It has also been posited that some Lucianic readings might go back to Hebrew readings that are not found in the Masoretic text but appear in the Qumran biblical texts. This phenomenon constitutes the proto-Lucianic problem. In chapter 1 the proto-Lucianic problem and its research history are introduced. Josephus' references to 1 Samuel are analyzed in chapter 2. His agreements with L are few and are mostly only apparent or, at best, coincidental. In chapters 3-6 the quotations by four early Church Fathers are analyzed. Hippolytus' Septuagint text is extremely hard to establish, since his quotations from 1 Samuel have been preserved only in Armenian and Georgian translations. Most of the suggested agreements between Hippolytus and L are only apparent or coincidental. Irenaeus is the most trustworthy textual witness of the four early Church Fathers. His quotations from 1 Samuel agree with L several times against codex Vaticanus (B) and all or most of the other witnesses in preserving the original text. Tertullian and Cyprian agree with L in attesting some Hebraizing approximations that do not seem to be of Hexaplaric origin; more likely these are early Hebraizing readings of the same tradition as the kaige recension. In chapter 7 it is noted that Origen, although a pre-Lucianic Father, does not qualify as a proto-Lucianic witness. General observations about the Old Latin witnesses, as well as an analysis of the manuscript La115, are given in chapter 8. In chapter 9 the theory of the proto-Lucianic recension is discussed. In order to demonstrate the existence of the proto-Lucianic recension, one should find instances of indisputable agreement between the Qumran biblical manuscripts and L in readings that are secondary in Greek. No such case can be found in the Qumran material in 1 Samuel. In the text-historical conclusions (chapter 10) it is noted that, of all the suggested proto-Lucianic agreements in 1 Samuel (about 75, plus 70 in La115), more than half are only apparent or, at best, coincidental. Of the indisputable agreements, however, 26 are agreements in the original reading. In about 20 instances the agreement is in a secondary reading. These agreements are early variants, mostly minor changes of the kind that happen all the time in the course of transmission. Four of the agreements, however, are in a pre-Hexaplaric Hebraizing approximation that found its way independently into the pre-Lucianic witnesses and the Lucianic recension. The study aims at demonstrating the value of the Lucianic text as a textual witness: under the recensional layer(s) there is an ancient text that preserves very old, even original, readings which have not been preserved in B and most of the other witnesses. The study also confirms the value of the early Church Fathers as textual witnesses.