18 results for Phase space methods

in Helda - Digital Repository of the University of Helsinki


Relevance: 100.00%

Abstract:

It is well known that an integrable (in the sense of Arnold-Jost) Hamiltonian system gives rise to quasi-periodic motion, with trajectories running on invariant tori. These tori foliate the whole phase space. If we perturb an integrable system, the Kolmogorov-Arnold-Moser (KAM) theorem states that, provided a non-degeneracy condition holds and the perturbation is sufficiently small, most of the invariant tori carrying quasi-periodic motion persist, getting only slightly deformed. Moreover, the measure of the set of destroyed tori tends to zero as the size of the perturbation does. In the first part of the thesis we use a Renormalization Group (RG) scheme to prove the classical KAM result in the case of a non-analytic perturbation (the latter is only assumed to have continuous derivatives up to a sufficiently large order). We proceed by solving a sequence of problems in which the perturbations are analytic approximations of the original one, and finally show that the approximate solutions converge to a differentiable solution of the original problem. In the second part we use an RG scheme with continuous scales, so that instead of solving an iterative equation as in the classical RG approach to KAM, we end up solving a partial differential equation. This allows us to replace the complications of treating a sequence of iterative equations with an application of the Banach fixed point theorem in a suitable Banach space.
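For orientation, the classical KAM setting can be stated schematically as follows (a standard formulation in action-angle variables; the thesis itself weakens the analyticity assumption on the perturbation):

```latex
\[
  H(I,\theta) \;=\; h(I) \;+\; \varepsilon f(I,\theta),
  \qquad (I,\theta) \in \mathbb{R}^n \times \mathbb{T}^n .
\]
Under the non-degeneracy condition $\det\big(\partial^2 h/\partial I^2\big) \neq 0$,
every unperturbed torus whose frequency vector $\omega = \partial h/\partial I$
is Diophantine,
\[
  |\omega \cdot k| \;\geq\; \gamma\,|k|^{-\tau}
  \qquad \text{for all } k \in \mathbb{Z}^n \setminus \{0\},
\]
persists, slightly deformed, for $\varepsilon$ small enough, and the measure of
the set of destroyed tori tends to zero as $\varepsilon \to 0$.
```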

Relevance: 90.00%

Abstract:

Hamiltonian systems in stellar and planetary dynamics are typically near-integrable. For example, Solar System planets are almost in two-body orbits, and in simulations of the Galaxy the orbits of stars appear regular. For such systems, sophisticated numerical methods can be developed through integrable approximations. Following this theme, we discuss three distinct problems. We start by considering numerical integration techniques for planetary systems. Perturbation methods (which exploit the integrability of two-body motion) are preferred over conventional "blind" integration schemes. We introduce perturbation methods formulated in Cartesian variables. In our numerical comparisons these are superior to their conventional counterparts, but, by construction, lack the energy-preserving properties of symplectic integrators. They are, however, exceptionally well suited for relatively short-term integrations in which moderately high positional accuracy is required. The next exercise falls into the category of stability questions in planetary systems. Traditionally, the interest has been in the orbital stability of planets, which has been quantified, e.g., by Lyapunov exponents. We offer a complementary aspect by considering the protective effect that massive gas giants, like Jupiter, can offer to Earth-like planets inside the habitable zone of a planetary system. Our method produces a single quantity, called the escape rate, which characterizes the system of giant planets. We obtain some interesting results by computing escape rates for the Solar System. Galaxy modelling is our third and final topic. Because of the sheer number of stars (about 10^11 in the Milky Way), galaxies are often modelled as smooth potentials hosting distributions of stars. Unfortunately, only a handful of suitable potentials are integrable (the harmonic oscillator, the isochrone and the Stäckel potential). This severely limits the possibilities of finding an integrable approximation for an observed galaxy. A solution to this problem is torus construction: a method for numerically creating a foliation of invariant phase-space tori corresponding to a given target Hamiltonian. Canonically, the invariant tori are constructed by deforming the tori of some existing integrable toy Hamiltonian. Our contribution is to demonstrate how this can be accomplished using a Stäckel toy Hamiltonian in ellipsoidal coordinates.
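The abstract does not define the escape rate; as a rough illustration of the kind of computation involved, the following sketch (all parameter values and the escape criterion are my assumptions, not the thesis method) integrates massless test particles in the planar field of the Sun and one Jupiter-like planet and records when they become unbound:

```python
import numpy as np

GM_SUN = 4 * np.pi**2          # AU^3 / yr^2, so that a = 1 AU gives P = 1 yr
M_PLANET = 1e-3                # planet mass in solar masses (Jupiter-like)
A_PLANET = 5.2                 # planet semi-major axis [AU]

def accel(r, t):
    """Acceleration from the Sun plus a planet on a circular orbit."""
    w = 2 * np.pi / A_PLANET**1.5                       # mean motion (Kepler)
    r_p = A_PLANET * np.array([np.cos(w * t), np.sin(w * t)])
    a = -GM_SUN * r / np.linalg.norm(r)**3
    d = r - r_p
    return a - GM_SUN * M_PLANET * d / np.linalg.norm(d)**3

def escape_times(n=100, t_end=1e3, dt=1e-3, seed=0):
    """Integrate n test particles; return the times at which they escape."""
    rng = np.random.default_rng(seed)
    times = []
    for _ in range(n):
        a0 = rng.uniform(0.9, 1.5)                      # start in the habitable zone
        r = np.array([a0, 0.0])
        v = np.array([0.0, np.sqrt(GM_SUN / a0)])       # circular orbital speed
        t = 0.0
        while t < t_end:
            v += 0.5 * dt * accel(r, t)                 # leapfrog: kick
            r += dt * v                                 #           drift
            v += 0.5 * dt * accel(r, t + dt)            #           kick
            t += dt
            if 0.5 * v @ v - GM_SUN / np.linalg.norm(r) > 0:
                times.append(t)                         # unbound: escaped
                break
    return times     # an escape rate could then be fitted to these times
```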

Relevance: 80.00%

Abstract:

The output of a laser is a high frequency propagating electromagnetic field with superior coherence and brightness compared to that emitted by thermal sources. A multitude of different types of lasers exist, which also translates into large differences in the properties of their output. Moreover, the characteristics of the electromagnetic field emitted by a laser can be influenced from the outside, e.g., by injecting an external optical field or by optical feedback. In the case of free-running solitary class-B lasers, such as semiconductor and Nd:YVO4 solid-state lasers, the phase space is two-dimensional, the dynamical variables being the population inversion and the amplitude of the electromagnetic field. The two-dimensional structure of the phase space means that no complex dynamics can be found. If a class-B laser is perturbed from its steady state, the steady state is restored after a short transient. However, as discussed in part (i) of this Thesis, the static properties of class-B lasers, as well as their artificially or noise induced dynamics around the steady state, can be experimentally studied in order to gain insight into laser behaviour, and to determine model parameters that are not known ab initio. In this Thesis particular attention is given to the linewidth enhancement factor, which describes the coupling between the gain and the refractive index in the active material. A highly desirable attribute of an oscillator is stability, both in frequency and amplitude. Nowadays, however, instabilities in coupled lasers have become an active area of research, motivated not only by the interesting complex nonlinear dynamics but also by potential applications. In part (ii) of this Thesis the complex dynamics of unidirectionally coupled, i.e., optically injected, class-B lasers is investigated. An injected optical field increases the dimensionality of the phase space to three by turning the phase of the electromagnetic field into an important variable. This has a radical effect on laser behaviour, since very complex dynamics, including chaos, can be found in a nonlinear system with a three-dimensional phase space. The output of the injected laser can be controlled in experiments by varying the injection rate and the frequency of the injected light. In this Thesis the dynamics of unidirectionally coupled semiconductor and Nd:YVO4 solid-state lasers is studied numerically and experimentally.
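A minimal sketch of a standard normalized rate-equation model for an optically injected class-B laser (illustrative textbook form, not the exact model or parameter values of the Thesis): E is the slowly varying complex field, n the excess population inversion, alpha the linewidth enhancement factor, eta the injection rate and delta the detuning.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha = 3.0              # linewidth enhancement factor
eps = 1e-3               # photon-to-inversion lifetime ratio (class-B: eps << 1)
p = 1.0                  # pump parameter above threshold
eta, delta = 0.02, 0.1   # injection strength and frequency detuning

def rhs(t, y):
    E = y[0] + 1j * y[1]                                # complex field
    n = y[2]                                            # excess inversion
    dE = 0.5 * (1 + 1j * alpha) * n * E - 1j * delta * E + eta
    dn = eps * (p - n - (1 + n) * abs(E)**2)
    return [dE.real, dE.imag, dn]

sol = solve_ivp(rhs, (0.0, 2e4), [1.0, 0.0, 0.0], max_step=1.0)
intensity = sol.y[0]**2 + sol.y[1]**2   # |E|^2: constant when injection-locked,
                                        # oscillating or chaotic otherwise
```

Sweeping (eta, delta) and inspecting the intensity time series reveals the locked, periodic and chaotic regimes explored in the injection experiments described above.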

Relevance: 80.00%

Abstract:

The cross section for jets from b quarks produced in association with a W boson has been measured in ppbar collision data corresponding to 1.9 fb^-1 of integrated luminosity recorded by the CDF II detector at the Tevatron. The W+b-jets process poses a significant background in measurements of top quark production and in prominent searches for the Higgs boson. We measure a b-jet cross section of 2.74 ± 0.27 (stat.) ± 0.42 (syst.) pb in association with a single flavor of leptonic W boson decay, over a limited kinematic phase space. The measured result is not accommodated by several available theoretical predictions.

Relevance: 80.00%

Abstract:

We show that the dynamical Wigner functions for noninteracting fermions and bosons can have complex singularity structures, with a number of new solutions accompanying the usual mass-shell dispersion relations. These new shell solutions are shown to encode the information of the quantum coherence between particles and antiparticles, between left- and right-moving chiral states, and/or between different flavour states. Analogously to the usual derivation of the Boltzmann equation, we impose this extended phase space structure on the full interacting theory. This extension of the quasiparticle approximation gives rise to a self-consistent equation of motion for a density matrix that combines quantum mechanical coherence evolution with a well-defined collision integral giving rise to decoherence. Several applications of the method are given, for example to coherent particle production, electroweak baryogenesis, and the study of decoherence and thermalization.
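Schematically (a standard definition, with sign and ordering conventions that may differ from the papers'), the fermionic Wigner function is the Fourier transform of the two-point function with respect to the relative coordinate:

```latex
\[
  W(k,x) \;=\; \int \mathrm{d}^4 r \; e^{\,i k \cdot r}\,
  \Big\langle \bar\psi\big(x - \tfrac{r}{2}\big)\, \psi\big(x + \tfrac{r}{2}\big) \Big\rangle .
\]
For a free field the equations of motion force $W$ to be supported on singular
shells in $k_0$: the usual mass shells $k_0 = \pm\omega_{\mathbf k}$ with
$\omega_{\mathbf k} = \sqrt{\mathbf k^2 + m^2}$, plus the new coherence shells
described above.
```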

Relevance: 80.00%

Abstract:

We present a measurement of the top quark mass with t-tbar dilepton events produced in p-pbar collisions at the Fermilab Tevatron at $\sqrt{s}$=1.96 TeV and collected by the CDF II detector. A sample of 328 events with a charged lepton (electron or muon) and an isolated track, corresponding to an integrated luminosity of 2.9 fb$^{-1}$, is selected as t-tbar candidates. To account for the unconstrained event kinematics, we scan over the phase space of the azimuthal angles ($\phi_{\nu_1},\phi_{\nu_2}$) of the neutrinos and reconstruct the top quark mass for each $\phi_{\nu_1},\phi_{\nu_2}$ pair by minimizing a $\chi^2$ function in the t-tbar dilepton hypothesis. We assign $\chi^2$-dependent weights to the solutions in order to build a preferred mass for each event. Preferred mass distributions (templates) are built from simulated t-tbar and background events, and parameterized in order to provide continuous probability density functions. A likelihood fit to the mass distribution in data as a weighted sum of signal and background probability density functions gives a top quark mass of $165.5^{+{3.4}}_{-{3.3}}$(stat.)$\pm 3.1$(syst.) GeV/$c^2$.
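A sketch of the per-event scan described above. The mapping from chi-square to weight, w = exp(-chi2/2), and the helper `reconstruct` are assumptions for illustration; the abstract states only that the weights are chi-square-dependent.

```python
import numpy as np

def preferred_mass(reconstruct, n_steps=24):
    """Scan the neutrino azimuths (phi1, phi2) and combine the solutions.

    `reconstruct(phi1, phi2)` is a hypothetical callable returning the top
    quark mass and the minimized chi^2 under the ttbar dilepton hypothesis.
    """
    phis = np.linspace(0.0, 2.0 * np.pi, n_steps, endpoint=False)
    masses, weights = [], []
    for phi1 in phis:
        for phi2 in phis:
            m, chi2 = reconstruct(phi1, phi2)
            masses.append(m)
            weights.append(np.exp(-0.5 * chi2))   # assumed chi^2-dependent weight
    w = np.array(weights)
    return float(np.dot(w, masses) / w.sum())     # the event's "preferred" mass
```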

Relevance: 30.00%

Abstract:

The feasibility of different modern analytical techniques for the mass spectrometric detection of anabolic androgenic steroids (AAS) in human urine was examined in order to improve on prevailing analytical methods and to find effective strategies for sports drug testing. A comparative study of the sensitivity and specificity of gas chromatography (GC) combined with low resolution (LRMS) and high resolution mass spectrometry (HRMS) in screening of AAS was carried out with four metabolites of methandienone. Measurements were done in selected ion monitoring mode, with HRMS using a mass resolution of 5000. With HRMS the detection limits were considerably lower than with LRMS, enabling detection of steroids at levels as low as 0.2-0.5 ng/ml. Even with HRMS, however, the biological background hampered the detection of some steroids. The applicability of liquid-phase microextraction (LPME) was studied with metabolites of fluoxymesterone, 4-chlorodehydromethyltestosterone, stanozolol and danazol. Factors affecting the extraction process were studied, and a novel LPME method with in-fiber silylation was developed and validated for GC/MS analysis of the danazol metabolite. The method allowed precise, selective and sensitive analysis of the metabolite and enabled simultaneous filtration, extraction, enrichment and derivatization of the analyte from urine without any further sample preparation steps. Liquid chromatographic/tandem mass spectrometric (LC/MS/MS) methods utilizing electrospray ionization (ESI), atmospheric pressure chemical ionization (APCI) and atmospheric pressure photoionization (APPI) were developed and applied to the detection of oxandrolone and metabolites of stanozolol and 4-chlorodehydromethyltestosterone in urine. All methods exhibited high sensitivity and specificity. ESI showed the best applicability, however, and an LC/ESI-MS/MS method for routine screening of nine 17-alkyl-substituted AAS was therefore developed, enabling fast and precise measurement of all analytes with detection limits below 2 ng/ml. The potential of chemometrics to resolve complex GC/MS data was demonstrated with samples prepared for AAS screening. Acquired full scan spectral data (m/z 40-700) were processed with the OSCAR algorithm (Optimization by Stepwise Constraints of Alternating Regression). The deconvolution process was able to extract more than twice as many components from a GC/MS run as there were visible chromatographic peaks; severely overlapping components, as well as components hidden in the chromatographic background, could be isolated successfully. All the studied techniques proved to be useful analytical tools for improving the detection of AAS in urine. Which procedure is superior is, however, compound-dependent, and the different techniques complement each other.
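A generic alternating-regression sketch of the deconvolution idea (in the spirit of MCR-ALS-style curve resolution; this is not the OSCAR implementation). The GC/MS data matrix D (scans x m/z channels) is factored as D ≈ C @ S, where C holds the elution profiles and S the component spectra, by alternating constrained least-squares steps:

```python
import numpy as np

def als_deconvolve(D, n_components, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    C = rng.random((D.shape[0], n_components))            # initial elution profiles
    for _ in range(n_iter):
        S = np.linalg.lstsq(C, D, rcond=None)[0]          # spectra given profiles
        S = np.clip(S, 0.0, None)                         # non-negativity constraint
        C = np.linalg.lstsq(S.T, D.T, rcond=None)[0].T    # profiles given spectra
        C = np.clip(C, 0.0, None)
    return C, S   # resolved profiles/spectra, including overlapping components
```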

Relevance: 30.00%

Abstract:

Solid materials can exist in different physical structures without a change in chemical composition. This phenomenon, known as polymorphism, has several implications for pharmaceutical development and manufacturing. Different solid forms of a drug can possess different physical and chemical properties, which may affect processing characteristics and stability, as well as the performance of the drug in the human body. Knowledge and control of the solid forms is therefore fundamental to maintaining the safety and quality of pharmaceuticals. During manufacture, harsh conditions can give rise to unexpected solid phase transformations and thereby change the behavior of the drug. Traditionally, pharmaceutical production has relied on time-consuming off-line analysis of production batches and finished products, which has led to poor understanding of processes and drug products. New, powerful methods that enable real-time monitoring of pharmaceuticals during manufacturing are therefore greatly needed. The aim of this thesis was to apply spectroscopic techniques to solid phase analysis within different stages of drug development and manufacturing, and thus provide a molecular-level insight into the behavior of active pharmaceutical ingredients (APIs) during processing. Applications to polymorph screening and different unit operations were developed and studied. A new approach to dissolution testing, involving simultaneous measurement of drug concentration in the dissolution medium and in-situ solid phase analysis of the dissolving sample, was introduced and studied. Solid phase analysis was successfully performed during the different stages, enabling a molecular-level insight into the occurring phenomena. Near-infrared (NIR) spectroscopy was utilized in screening of polymorphs and processing-induced transformations (PITs); polymorph screening was also studied with NIR and Raman spectroscopy in tandem. Quantitative solid phase analysis during fluidized bed drying was performed with in-line NIR and Raman spectroscopy and partial least squares (PLS) regression, and different dehydration mechanisms were studied using in-situ spectroscopy and partial least squares discriminant analysis (PLS-DA). In-situ solid phase analysis with Raman spectroscopy during dissolution testing enabled analysis of dissolution as a whole, and provided a scientific explanation for changes in the dissolution rate. The methods applied and studied provide better process understanding and knowledge of the drug products, and therefore a way to achieve better quality.
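A minimal sketch of the kind of quantitative in-line analysis described above: PLS regression mapping NIR or Raman spectra to solid-form content. The file names, the target quantity and the preprocessing choice are illustrative assumptions, not from the thesis.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# X: spectra (n_samples x n_wavelengths); y: known fraction of one solid form
X_train = np.load("calibration_spectra.npy")   # hypothetical calibration set
y_train = np.load("hydrate_fraction.npy")

X_train = X_train - X_train.mean(axis=1, keepdims=True)   # crude baseline removal
pls = PLSRegression(n_components=3).fit(X_train, y_train)

X_new = np.load("inline_spectra.npy")          # spectra acquired during drying
X_new = X_new - X_new.mean(axis=1, keepdims=True)
print(pls.predict(X_new))                      # predicted solid-form content
```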

Relevance: 30.00%

Abstract:

Purpose: This study approached craft from the standpoint of phenomenological philosophy, interpreting craft through Maurice Merleau-Ponty's phenomenology of the body. The main focus was the physical phase of the craft process, wherein a product is made from material. The aim was to interpret corporality in craft. There is no previous research in craft science focusing on the lived body. Physical, bodily making is inherent in craft, yet it is rarely articulated. Recent discussion has treated craft as a "whole", which emphasizes the design part of the process, so that craft becomes conceptualized through theories of art and design. The axiomatic yet silenced basis of craft, corporality, deserves to be examined as well. This study therefore answers the questions: how does craft manifest in the light of the phenomenology of the body, and what is corporality in craft? Methods: In this study I cultivated a phenomenological attitude and turned an exploring eye on craft "in itself". I also stepped back from mere making and observed the occurrence of craft in order to describe it verbally. I studied Maurice Merleau-Ponty's phenomenology of the body through his principal work (2002) and previous interpretations of it. The interpretation and understanding of the textual data were based on Gadamer's hermeneutics, and the four-part composition of the study followed Koski's (1995) version of the Gadamerian process of textual interpretation. Conclusions: Within the framework of bodily phenomenology, craft is to be contemplated as a mutual relationship between the maker and the world that materializes in bodily making. At the moment of making, a human being becomes one with his craft, and the connection between the maker, the material and the equipment appears as communication. An operational dimension was distinctive of the intentionality of craft, which operates in many ways, including through craft products. The synaesthesia and synergy of craft were emphasized, and craft as bodily practice came to life through them. The moment of making appeared as a situation generating time and space, in which throwing oneself into making may give the maker an experience of rising beyond the dualism of mind and body. The conception of the implicit nature of craft knowledge was strengthened. In the light of this interpretation it was possible to conceptualize craft as a performance, and making "in itself" as a work of art. In this sense craft appeared as bodily expression, which as an experience approaches art without ultimately being art. The concept of the aesthetic was located in making as well. The bodily and phenomenological viewpoint on craft provided material for critically contemplating the concept of "whole craft" (kokonainen käsityö) and offered a different kind of understanding of craft as making.

Relevance: 30.00%

Abstract:

This thesis examines the feasibility of a forest inventory method based on two-phase sampling for estimating forest attributes at the stand or substand level for forest management purposes. The method is based on multi-source forest inventory combining auxiliary data, consisting of remote sensing imagery or other geographic information, with field measurements. Auxiliary data are utilized as first-phase data covering all inventory units. Various methods were examined for improving the accuracy of the forest estimates. Pre-processing of auxiliary data, in the form of correcting the spectral properties of aerial imagery, was examined (I), as was the selection of aerial image features for estimating forest attributes (II). Various spatial units were compared for extracting image features in a remote sensing aided forest inventory utilizing very high resolution imagery (III). A number of data sources were combined and different weighting procedures were tested in estimating forest attributes (IV, V). Correction of the spectral properties of aerial images proved to be a straightforward and advantageous method for improving the correlation between the image features and the measured forest attributes. Testing different image features that can be extracted from aerial photographs (and other very high resolution images) showed that the images contain a wealth of relevant information that can be extracted only by utilizing the spatial organization of the image pixel values. Furthermore, careful selection of image features for the inventory task generally gives better results than inputting all extractable features into the estimation procedure. When the spatial units for extracting very high resolution image features were examined, an approach based on image segmentation generally showed advantages over a traditional sample-plot-based approach. Combining several data sources resulted in more accurate estimates than any of the individual data sources alone. The best combined estimate can be derived by weighting the estimates produced by the individual data sources by the inverses of their mean square errors. Although the plot-level estimation accuracy of a two-phase sampling inventory can be improved in many ways, forest estimates based mainly on single-view satellite and aerial imagery remain a relatively poor basis for stand-level management decisions.
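The combination rule stated above is simple to make concrete (variable names and example values are illustrative):

```python
import numpy as np

def combine(estimates, mses):
    """Inverse-MSE weighted combination of per-source attribute estimates."""
    w = 1.0 / np.asarray(mses, dtype=float)
    return np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)

# e.g. stand volume estimates (m^3/ha) from aerial imagery, satellite imagery
# and field data, each with its own mean square error
print(combine([210.0, 180.0, 195.0], [900.0, 1600.0, 400.0]))
```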

Relevance: 30.00%

Abstract:

This thesis consists of an introduction, four research articles and an appendix. The thesis studies the relations between two different approaches to the continuum limit of models of two-dimensional statistical mechanics at criticality. The approach of conformal field theory (CFT) can be thought of as the algebraic classification of some basic objects in these models; it has been successfully used by physicists since the 1980s. The other approach, Schramm-Loewner evolutions (SLEs), is a recently introduced set of mathematical methods for studying the random curves or interfaces occurring in the continuum limit of the models. The first and second included articles argue, on the basis of statistical mechanics, what a plausible relation between SLEs and conformal field theory would be. The first article studies multiple SLEs, that is, several random curves simultaneously in a domain. The proposed definition is compatible with a natural commutation requirement suggested by Dubédat. The curves of a multiple SLE may form different topological configurations, "pure geometries". We conjecture a relation between the topological configurations and the CFT concepts of conformal blocks and operator product expansions. Example applications of multiple SLEs include crossing probabilities for percolation and the Ising model. The second article studies SLE variants that represent models with boundary conditions implemented by primary fields. The most well known of these, SLE(kappa, rho), is shown to be simple in terms of the Coulomb gas formalism of CFT. In the third article the space of local martingales for variants of SLE is shown to carry a representation of the Virasoro algebra. Finding this structure is guided by the relation of SLEs and CFTs in general, but the result is established in a straightforward fashion. This article, too, emphasizes multiple SLEs and proposes a possible way of treating pure geometries in terms of the Coulomb gas. The fourth article states results of applications of the Virasoro structure to the open questions of SLE reversibility and duality; proofs of the stated results are provided in the appendix. The objective is an indirect computation of certain polynomial expected values. Provided that these expected values exist, in generic cases they are shown to possess the desired properties, thus giving support for both reversibility and duality.
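The common starting point of these SLE variants is the chordal Loewner equation with a Brownian driving function (a standard definition, quoted here for orientation):

```latex
\[
  \partial_t g_t(z) \;=\; \frac{2}{g_t(z) - \sqrt{\kappa}\,B_t},
  \qquad g_0(z) = z,
\]
where $B_t$ is a standard Brownian motion and $g_t$ maps the upper half-plane
cut along the growing curve back to the upper half-plane. The parameter
$\kappa$ selects the model, and the conjectured CFT correspondence identifies
the central charge as $c = (3\kappa - 8)(6 - \kappa)/(2\kappa)$.
```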

Relevance: 30.00%

Abstract:

Genetics, the science of heredity and variation in living organisms, has a central role in medicine, in breeding crops and livestock, and in studying fundamental topics of the biological sciences such as evolution and cell functioning. The field of genetics is currently developing rapidly because of recent advances in technologies by which molecular data can be obtained from living organisms. To extract as much information as possible from such data, the analyses need to be carried out using statistical models tailored to take account of the particular genetic processes. In this thesis we formulate and analyze Bayesian models for genetic marker data of contemporary individuals. The major focus is on modeling the unobserved recent ancestry of the sampled individuals (say, for tens of generations or so), which is carried out using explicit probabilistic reconstructions of the pedigree structures accompanied by the gene flows at the marker loci. For such a recent history, the recombination process is the major genetic force shaping the genomes of the individuals, and it is included in the model by assuming that the recombination fractions between adjacent markers are known. The posterior distribution of the unobserved history of the individuals is studied conditionally on the observed marker data by using a Markov chain Monte Carlo (MCMC) algorithm. The example analyses consider estimation of the population structure, the relatedness structure (both at the level of whole genomes and at each marker separately), and haplotype configurations. For situations where the pedigree structure is partially known, an algorithm for creating an initial state for the MCMC algorithm is given. Furthermore, the thesis includes an extension of the model for the recent genetic history to situations where a quantitative phenotype has also been measured from the contemporary individuals. In that case the goal is to identify positions on the genome that affect the observed phenotypic values. This task is carried out within the Bayesian framework, where the number and the relative effects of the quantitative trait loci are treated as random variables whose posterior distribution is studied conditionally on the observed genetic and phenotypic data. In addition, the thesis contains an extension of a widely used haplotyping method, the PHASE algorithm, to settings where genetic material from several individuals has been pooled together and the allele frequencies of each pool are determined in a single genotyping.

Relevance: 30.00%

Abstract:

Large-scale chromosome rearrangements such as copy number variants (CNVs) and inversions account for a considerable proportion of the genetic variation between human individuals, and in a number of cases they have been closely linked with various inheritable diseases. Single-nucleotide polymorphisms (SNPs) are another large part of the genetic variation between individuals; they are typically abundant, and their measurement is straightforward and cheap. This thesis presents computational means of using SNPs to detect the presence of inversions and deletions, a particular variety of CNV. Technically, the inversion-detection algorithm detects the suppressed recombination rate between inverted and non-inverted haplotype populations, whereas the deletion-detection algorithm uses the EM algorithm to estimate the haplotype frequencies of a window with and without a deletion haplotype. As a contribution to population biology, a coalescent simulator for simulating inversion polymorphisms has been developed. Coalescent simulation is a backward-in-time method of modelling population ancestry. Technically, the simulator also models multiple crossovers by using the Counting model as the chiasma interference model. Finally, this thesis includes an experimental section. The aforementioned methods were tested on synthetic data to evaluate their power and specificity. They were also applied to the HapMap Phase II and Phase III data sets, yielding a number of candidates for previously unknown inversions and deletions, and correctly detecting known rearrangements of both kinds.
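An illustrative EM sketch in the spirit of the deletion-detection idea above (a simplified single-SNP version, not the thesis algorithm): a hemizygous A/D carrier of a deletion allele D is genotyped as an apparent AA homozygote, so the deletion frequency is a hidden quantity. Under Hardy-Weinberg proportions, EM recovers the allele frequencies from observed genotype counts.

```python
def em_deletion(n_AA, n_AB, n_BB, n_missing, n_iter=100):
    """Estimate (pA, pB, pD) from apparent genotype counts; DD shows as missing."""
    pA, pB, pD = 0.4, 0.4, 0.2                     # initial guess
    n2 = 2 * (n_AA + n_AB + n_BB + n_missing)      # total number of alleles
    for _ in range(n_iter):
        # E-step: expected deletion carriers hidden in the homozygote classes
        eAD = n_AA * 2 * pA * pD / (pA * pA + 2 * pA * pD)
        eBD = n_BB * 2 * pB * pD / (pB * pB + 2 * pB * pD)
        # M-step: re-estimate frequencies from the expected allele counts
        pA = (2 * n_AA - eAD + n_AB) / n2
        pB = (2 * n_BB - eBD + n_AB) / n2
        pD = (eAD + eBD + 2 * n_missing) / n2
    return pA, pB, pD

print(em_deletion(n_AA=450, n_AB=380, n_BB=150, n_missing=20))
```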

Relevance: 30.00%

Abstract:

An efficient and statistically robust solution for the identification of asteroids among numerous sets of astrometry is presented. In particular, numerical methods have been developed for the short-term identification of asteroids at discovery, and for the long-term identification of scarcely observed asteroids over apparitions, a task which has lacked a robust method until now. The methods are based on the solid foundation of statistical orbital inversion, properly taking into account the observational uncertainties, which allows for the detection of practically all correct identifications. Through the use of dimensionality-reduction techniques and efficient data structures, the exact methods have a log-linear, that is, O(n log n), computational complexity, where n is the number of included observation sets. The methods developed are thus suitable for future large-scale surveys, which anticipate a substantial increase in the astrometric data rate. Owing to the discontinuous nature of asteroid astrometry, separate sets of astrometry must be linked to a common asteroid from the very first discovery detections onwards. The reason for the discontinuity in the observed positions is the rotation of the observer with the Earth as well as the motion of the asteroid and the observer about the Sun. The aim of identification is therefore to find a set of orbital elements that reproduces the observed positions with residuals similar to the inevitable observational uncertainty. Unless the astrometric observation sets are linked, the corresponding asteroid is eventually lost as the uncertainty of the predicted positions grows too large to allow successful follow-up. Whereas the presented identification theory and the numerical comparison algorithm are generally applicable, that is, also in fields other than astronomy (e.g., in the identification of space debris), the numerical methods developed for asteroid identification can immediately be applied to all objects on heliocentric orbits with negligible non-gravitational effects within the time frame of the analysis. The methods developed have been successfully applied to various identification problems. Simulations have shown that they are able to find virtually all correct linkages despite challenges such as numerous scarce observation sets, astrometric uncertainty, numerous objects confined to a limited region on the celestial sphere, long linking intervals, and substantial parallaxes. Tens of previously unknown main-belt asteroids have been identified with the short-term method in a preliminary study to locate asteroids among numerous unidentified sets of single-night astrometry of moving objects, and scarce astrometry obtained nearly simultaneously with Earth-based and space-based telescopes has been successfully linked despite a substantial parallax. Using the long-term method, thousands of realistic 3-linkages, typically spanning several apparitions, have so far been found among designated observation sets each spanning less than 48 hours.

Relevance: 30.00%

Abstract:

The first observations of solar X-rays date back to the late 1940s. In order to observe solar X-rays, instruments have to be lifted above the Earth's atmosphere, since all high energy radiation from space is almost totally attenuated by it. This is a good thing for all living creatures, but bad for X-ray astronomers: detectors observing X-ray emission from space must be placed on board satellites, which makes this particular discipline of astronomy technologically and operationally demanding, as well as very expensive. In this thesis, I have focused on detectors dedicated to observing solar X-rays in the energy range 1-20 keV. The purpose of these detectors was to measure solar X-rays simultaneously with another X-ray spectrometer measuring fluorescence X-ray emission from the Moon's surface; the X-ray fluorescence emission is induced by the primary solar X-rays. If the elemental abundances on the Moon are to be determined with fluorescence analysis methods, the shape and intensity of the simultaneous solar X-ray spectrum must be known. The aim of this thesis is to describe the characterization and operation of our X-ray instruments on board two Moon missions, SMART-1 and Chandrayaan-1. The independent solar science performance of these two nearly identical X-ray spectrometers is also described. The detectors have two features in common: the primary detection element is made of a single-crystal silicon diode, and the field of view is circular and very large. The data obtained from these detectors are spectra with a 16-second time resolution. Before an instrument is launched into space, its performance must be characterized by ground calibrations; the basic operation of these detectors and their ground calibrations are described in detail. Two C-class flares are analyzed as examples to introduce the spectral fitting process. The first analysis shows the fit of a single spectrum of a C1 flare obtained during the peak phase. The second shows how to derive the time evolution of the fluxes, emission measures (EM) and temperatures through a whole C4 flare with a time resolution of 16 s. The preparatory data analysis procedures required for the spectral fitting of the data are also introduced in detail. Finally, a new solar monitor design equipped with concentrator optics and a moderately sized field of view is introduced.