21 results for random phase approximation

in Helda - Digital Repository of University of Helsinki


Relevance: 30.00%

Abstract:

We show that the dynamical Wigner functions for noninteracting fermions and bosons can have complex singularity structures with a number of new solutions accompanying the usual mass-shell dispersion relations. These new shell solutions are shown to encode information about the quantum coherence between particles and antiparticles, between left- and right-moving chiral states, and/or between different flavour states. Analogously to the usual derivation of the Boltzmann equation, we impose this extended phase-space structure on the full interacting theory. This extension of the quasiparticle approximation gives rise to a self-consistent equation of motion for a density matrix that combines quantum mechanical coherence evolution with a well-defined collision integral giving rise to decoherence. Several applications of the method are given, for example to coherent particle production, electroweak baryogenesis, and the study of decoherence and thermalization.
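For reference, the usual mass-shell dispersion relation that the new coherence-shell solutions are said to accompany is the standard free-particle one; a textbook relation, not a result of the thesis:

```latex
% Free-particle mass-shell constraint in the Wigner-function phase space;
% the delta function has support on the particle and antiparticle shells
% k_0 = \pm\omega_k.
\delta\!\left(k_0^2 - |\mathbf{k}|^2 - m^2\right)
  = \frac{1}{2\omega_k}\Big[\delta(k_0 - \omega_k) + \delta(k_0 + \omega_k)\Big],
\qquad \omega_k \equiv \sqrt{|\mathbf{k}|^2 + m^2}.
```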

Relevance: 30.00%

Abstract:

Light scattering, or scattering and absorption of electromagnetic waves, is an important tool in all remote-sensing observations. In astronomy, the light scattered or absorbed by a distant object can be the only source of information. In Solar-system studies, light-scattering methods are employed when interpreting observations of atmosphereless bodies such as asteroids, of planetary atmospheres, and of cometary or interplanetary dust. Our Earth is constantly monitored from artificial satellites at different wavelengths. In remote sensing of the Earth, light-scattering methods are not the only source of information: there is always the possibility of making in situ measurements. Satellite-based remote sensing is, however, superior in speed and coverage, provided that the scattered signal can be reliably interpreted. The optical properties of many industrial products play a key role in their quality. Especially for products such as paint and paper, the ability to obscure the background and to reflect light is of utmost importance. High-grade papers are evaluated based on their brightness, opacity, color, and gloss. In product development, there is a need for computer-based simulation methods that could predict the optical properties and, therefore, could be used in optimizing the quality while reducing the material costs. With paper, for instance, pilot experiments with an actual paper machine can be very time- and resource-consuming. The light-scattering methods presented in this thesis rigorously solve the interaction of light with material that has wavelength-scale structures. These methods are computationally demanding, and thus their speed and accuracy play a key role. Different implementations of the discrete-dipole approximation are compared in the thesis, and the results provide practical guidelines for choosing a suitable code. In addition, a novel method is presented for the numerical computation of orientation-averaged light-scattering properties of a particle, and the method is compared against existing techniques. Simulation of light scattering for various targets and the possible problems arising from the finite size of the model target are discussed in the thesis. Scattering by single particles and small clusters is considered, as well as scattering in particulate media and in continuous media with porosity or surface roughness. Various techniques for modeling the scattering media are presented, and the results are applied to optimizing the structure of paper. However, the same methods can be applied in light-scattering studies of Solar-system regoliths or cometary dust, or in any remote-sensing problem involving light scattering in random media with wavelength-scale structures.
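As an illustration of the orientation-averaging task mentioned above, a minimal brute-force sketch in Python: the single-orientation solver is a placeholder (a real computation would be, for example, one DDA run), and the averaging is plain Monte Carlo over uniformly random orientations rather than the novel method developed in the thesis.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def scattering_property(orientation):
    """Placeholder for a single-orientation light-scattering solver
    (e.g. one DDA computation); returns a dummy orientation-dependent value."""
    z = orientation.apply([0.0, 0.0, 1.0])
    return 1.0 + 0.3 * z[2] ** 2  # arbitrary smooth dependence on orientation

def orientation_average(n_samples=2000, seed=0):
    """Brute-force Monte Carlo average over uniformly random 3D orientations."""
    rotations = Rotation.random(n_samples, random_state=seed)
    values = np.array([scattering_property(rotations[i]) for i in range(n_samples)])
    return values.mean(), values.std() / np.sqrt(n_samples)

mean, err = orientation_average()
print(f"orientation-averaged value: {mean:.4f} +/- {err:.4f}")
```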

Relevance: 20.00%

Abstract:

Solid materials can exist in different physical structures without a change in chemical composition. This phenomenon, known as polymorphism, has several implications for pharmaceutical development and manufacturing. Various solid forms of a drug can possess different physical and chemical properties, which may affect processing characteristics and stability, as well as the performance of a drug in the human body. Therefore, knowledge and control of the solid forms are fundamental to maintaining safety and high quality of pharmaceuticals. During manufacture, harsh conditions can give rise to unexpected solid phase transformations and therefore change the behavior of the drug. Traditionally, pharmaceutical production has relied on time-consuming off-line analysis of production batches and finished products. This has led to poor understanding of processes and drug products. Therefore, new powerful methods that enable real-time monitoring of pharmaceuticals during manufacturing processes are greatly needed. The aim of this thesis was to apply spectroscopic techniques to solid phase analysis within different stages of drug development and manufacturing, and thus to provide a molecular-level insight into the behavior of active pharmaceutical ingredients (APIs) during processing. Applications to polymorph screening and different unit operations were developed and studied. A new approach to dissolution testing, which involves simultaneous measurement of drug concentration in the dissolution medium and in situ solid phase analysis of the dissolving sample, was introduced and studied. Solid phase analysis was successfully performed during different stages, enabling a molecular-level insight into the occurring phenomena. Near-infrared (NIR) spectroscopy was utilized in screening of polymorphs and processing-induced transformations (PITs). Polymorph screening was also studied with NIR and Raman spectroscopy in tandem. Quantitative solid phase analysis during fluidized bed drying was performed with in-line NIR and Raman spectroscopy and partial least squares (PLS) regression, and different dehydration mechanisms were studied using in situ spectroscopy and partial least squares discriminant analysis (PLS-DA). In situ solid phase analysis with Raman spectroscopy during dissolution testing enabled analysis of dissolution as a whole and provided a scientific explanation for changes in the dissolution rate. It was concluded that the methods applied and studied provide better process understanding and knowledge of the drug products, and therefore a way to achieve better quality.
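A hedged sketch of the kind of PLS calibration referred to above, with synthetic spectra standing in for in-line NIR data; the band positions, noise level, and component count are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for in-line NIR spectra: each spectrum is a linear mix of
# two hypothetical pure solid-form spectra plus noise; y is the fraction of form B.
rng = np.random.default_rng(1)
wavelengths = np.linspace(1100, 2500, 400)            # nm, illustrative range
pure_a = np.exp(-((wavelengths - 1450) / 60.0) ** 2)  # hypothetical band of form A
pure_b = np.exp(-((wavelengths - 1940) / 80.0) ** 2)  # hypothetical band of form B
y = rng.uniform(0.0, 1.0, size=200)                   # fraction of form B per sample
X = np.outer(1.0 - y, pure_a) + np.outer(y, pure_b)
X += 0.01 * rng.standard_normal(X.shape)              # measurement noise

pls = PLSRegression(n_components=3)
print("cross-validated R^2:", cross_val_score(pls, X, y, cv=5, scoring="r2").round(3))
```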

Relevance: 20.00%

Abstract:

This thesis examines the feasibility of a forest inventory method based on two-phase sampling for estimating forest attributes at the stand or substand level for forest management purposes. The method is based on multi-source forest inventory combining auxiliary data, consisting of remote sensing imagery or other geographic information, with field measurements. Auxiliary data are utilized as first-phase data covering all inventory units. Various methods were examined for improving the accuracy of the forest estimates. Pre-processing of auxiliary data in the form of correcting the spectral properties of aerial imagery was examined (I), as was the selection of aerial image features for estimating forest attributes (II). Various spatial units were compared for extracting image features in a remote sensing aided forest inventory utilizing very high resolution imagery (III). A number of data sources were combined and different weighting procedures were tested in estimating forest attributes (IV, V). Correction of the spectral properties of aerial images proved to be a straightforward and advantageous method for improving the correlation between the image features and the measured forest attributes. Testing different image features that can be extracted from aerial photographs (and other very high resolution images) showed that the images contain a wealth of relevant information that can be extracted only by utilizing the spatial organization of the image pixel values. Furthermore, careful selection of image features for the inventory task generally gives better results than inputting all extractable features into the estimation procedure. When the spatial units for extracting very high resolution image features were examined, an approach based on image segmentation generally showed advantages compared with a traditional sample plot-based approach. Combining several data sources resulted in more accurate estimates than any of the individual data sources alone. The best combined estimate can be derived by weighting the estimates produced by the individual data sources by the inverse values of their mean square errors. Although the plot-level estimation accuracy of a two-phase sampling inventory can be improved in many ways, forest estimates based mainly on single-view satellite and aerial imagery remain a relatively poor basis for making stand-level management decisions.
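The inverse-MSE weighting mentioned for combining data sources is, presumably, the standard inverse-variance combination; written out:

```latex
% Combined estimate from K data sources, each source k contributing an
% estimate \hat{y}_k with mean square error MSE_k:
\hat{y}_{\mathrm{comb}}
  = \frac{\sum_{k=1}^{K} \hat{y}_k / \mathrm{MSE}_k}
         {\sum_{k=1}^{K} 1 / \mathrm{MSE}_k}.
```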

Relevance: 20.00%

Abstract:

There is intense activity in the area of theoretical chemistry of gold. It is now possible to predict new molecular species and, more recently, solids by combining relativistic methodology with isoelectronic thinking. In this thesis we predict a series of solid sheet-type crystals for Group-11 cyanides, MCN (M=Cu, Ag, Au), and Group-2 and Group-12 carbides, MC2 (M=Be-Ba, Zn-Hg). The idea of sheets is then extended to nanostrips, which can be bent to nanorings. The bending energies and deformation frequencies can be systematized by treating these molecules as elastic bodies. In these species Au atoms act as an 'intermolecular glue'. Further suggested molecular species are the new uncongested aurocarbons and the neutral Au_nHg_m clusters. Many of the suggested species are expected to be stabilized by aurophilic interactions. We also estimate the MP2 basis-set limit of the aurophilicity for the model compounds [ClAuPH_3]_2 and [P(AuPH_3)_4]^+. Besides investigating the size of the basis set applied, our research confirms that the 19-VE TZVP+2f level, used a decade ago, already produced 74% of the present aurophilic attraction energy for the [ClAuPH_3]_2 dimer. Likewise, we verify the preferred C4v structure for the [P(AuPH_3)_4]^+ cation at the MP2 level. We also perform the first calculation on model aurophilic systems using the SCS-MP2 method and compare the results to high-accuracy CCSD(T) ones. The recently obtained high-resolution microwave spectra of MCN molecules (M=Cu, Ag, Au) provide an excellent testing ground for quantum chemistry. MP2 or CCSD(T) calculations, correlating all 19 valence electrons of Au and including BSSE and SO corrections, are able to give bond lengths to within 0.6 pm or better. Our calculated vibrational frequencies are expected to be better than the currently available experimental estimates. Qualitative evidence for multiple Au-C bonding in triatomic AuCN is also found.
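The elastic-body systematization of the nanoring bending energies presumably builds on the classical elastic-rod result; a sketch under that assumption (B here denotes a generic bending rigidity, not a quantity reported in the abstract):

```latex
% Bending energy of a uniform elastic rod of length L and bending rigidity B,
% closed into a circular ring of radius R = L/(2\pi), i.e. constant curvature
% \kappa = 1/R:
E_{\mathrm{bend}} = \frac{B}{2}\int_0^L \kappa(s)^2\,\mathrm{d}s
                  = \frac{B L}{2 R^2}
                  = \frac{2\pi^2 B}{L}.
```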

Relevance: 20.00%

Abstract:

Polymer-protected gold nanoparticles have been successfully synthesized by both "grafting-from" and "grafting-to" techniques. The synthesis methods of the gold particles were systematically studied. Two chemically different homopolymers were used to protect the gold particles: thermo-responsive poly(N-isopropylacrylamide), PNIPAM, and polystyrene, PS. Both polymers were synthesized by using a controlled/living radical polymerization process, reversible addition-fragmentation chain transfer (RAFT) polymerization, to obtain monodisperse polymers of various molar masses carrying dithiobenzoate end groups. Hence, particles protected either with PNIPAM (PNIPAM-AuNPs) or with a mixture of the two polymers (PNIPAM/PS-AuNPs, i.e., amphiphilic gold nanoparticles) were prepared. The particles contain monodisperse polymer shells, though the cores are somewhat polydisperse. Aqueous PNIPAM-AuNPs prepared using the "grafting-from" technique show thermo-responsive properties derived from the tethered PNIPAM chains. For PNIPAM-AuNPs prepared using the "grafting-to" technique, two phase transitions of PNIPAM were observed in microcalorimetric studies of the aqueous solutions. The first transition, with a sharp and narrow endothermic peak, occurs at lower temperature, and the second one, with a broader peak, at higher temperature. In the first transition the PNIPAM segments show much higher cooperativity than in the second one. The observations are tentatively rationalized by assuming that the PNIPAM brush can be subdivided into two zones, an inner and an outer one. In the inner zone, the PNIPAM segments are close to the gold surface, densely packed, less hydrated, and undergo the first transition. In the outer zone, on the other hand, the PNIPAM segments are looser and more hydrated, adopt a restricted random coil conformation, and show a phase transition which is dependent on both the particle concentration and the chemical nature of the end groups of the PNIPAM chains. Monolayers of the amphiphilic gold nanoparticles at the air-water interface show several characteristic regions upon compression in a Langmuir trough at room temperature. These can be attributed to conformational transitions of the polymer from a pancake to a brush. The compression isotherms also show temperature dependence due to the thermo-responsive properties of the tethered PNIPAM chains. The films were successfully deposited on substrates by the Langmuir-Blodgett technique. Sessile drop contact angle measurements conducted on both sides of a monolayer deposited at room temperature reveal two slightly different contact angles, which may indicate phase separation between the tethered PNIPAM and PS chains on the gold core. The optical properties of the amphiphilic gold nanoparticles were studied both in situ at the air-water interface and on the deposited films. The in situ SPR band of the monolayer shows a blue shift with compression, while a red shift with the deposition cycle occurs in the deposited films. The blue shift is compression-induced and closely related to the conformational change of the tethered PNIPAM chains, which may decrease the polarity of the local environment of the gold cores. The red shift in the deposited films is due to weak interparticle coupling between adjacent particles. Temperature effects on the SPR band were also investigated in both cases.
In the in situ case, at a constant surface pressure, an increase in temperature leads to a red shift of the SPR band, likely due to the shrinking of the tethered PNIPAM chains as well as to a slight decrease in the distance between adjacent particles, which increases the interparticle coupling. In the deposited films, on the other hand, the SPR band red-shifts more with each deposition cycle at high temperature than at low temperature. This is because the compressibility of the polymer-coated gold nanoparticles at high temperature leads to a smaller interparticle distance, resulting in an increase of the interparticle coupling in the deposited multilayers.

Relevance: 20.00%

Abstract:

Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or base the inference only on the likelihood function, may be a fundamental question in theory, but in practice it may well be of less importance if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where parametric assumptions in standard statistical models would be too rigid, more flexible model formulation, combined with fully probabilistic inference, can be achieved using hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two of them hierarchical Bayesian modeling. Because maximum likelihood may be presented as a special case of Bayesian inference, but not the other way round, in the introductory part of this work we present a framework for probability-based inference using only Bayesian concepts. We also re-derive some results presented in the original articles using the toolbox developed herein, to show that they are also justifiable under this more general framework. Here the assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that the same reasoning can also be applied under sampling from a finite population. The main emphasis here is on probability-based inference under incomplete observation due to study design. This is illustrated using a generic two-phase cohort sampling design as an example. The alternative approaches presented for analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set, conditioning on the rule that generated that set. Conditional likelihood inference is also applied to a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates together with a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is feasible in this case as well.
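The statement that maximum likelihood can be justified as a Gaussian approximation at the posterior mode is the standard Laplace approximation; spelled out here with generic notation, not notation taken from the thesis:

```latex
% Laplace (Gaussian) approximation around the posterior mode \hat\theta;
% with a flat prior, \hat\theta is the maximum likelihood estimate and H is
% the observed information (negative Hessian of the log-posterior):
p(\theta \mid y) \;\approx\; \mathcal{N}\!\left(\theta \;\middle|\; \hat\theta,\; H^{-1}\right),
\qquad
H = -\left.\frac{\partial^2 \log p(\theta \mid y)}{\partial\theta\,\partial\theta^{\mathsf{T}}}\right|_{\theta=\hat\theta}.
```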

Relevance: 20.00%

Abstract:

Genetics, the science of heredity and variation in living organisms, has a central role in medicine, in breeding crops and livestock, and in studying fundamental topics of the biological sciences such as evolution and cell functioning. Currently, the field of genetics is undergoing rapid development because of recent advances in technologies by which molecular data can be obtained from living organisms. To extract the most information from such data, the analyses need to be carried out using statistical models that are tailored to take account of the particular genetic processes. In this thesis we formulate and analyze Bayesian models for genetic marker data of contemporary individuals. The major focus is on modeling the unobserved recent ancestry of the sampled individuals (say, for tens of generations or so), which is carried out by using explicit probabilistic reconstructions of the pedigree structures accompanied by the gene flows at the marker loci. For such a recent history, the recombination process is the major genetic force shaping the genomes of the individuals, and it is included in the model by assuming that the recombination fractions between adjacent markers are known. The posterior distribution of the unobserved history of the individuals is studied conditionally on the observed marker data by using a Markov chain Monte Carlo (MCMC) algorithm. The example analyses consider estimation of the population structure, the relatedness structure (both at the level of whole genomes and at each marker separately), and haplotype configurations. For situations where the pedigree structure is partially known, an algorithm to create an initial state for the MCMC algorithm is given. Furthermore, the thesis includes an extension of the model for the recent genetic history to situations where a quantitative phenotype has also been measured from the contemporary individuals. In that case the goal is to identify positions on the genome that affect the observed phenotypic values. This task is carried out within the Bayesian framework, where the number and the relative effects of the quantitative trait loci are treated as random variables whose posterior distribution is studied conditionally on the observed genetic and phenotypic data. In addition, the thesis contains an extension of a widely used haplotyping method, the PHASE algorithm, to settings where genetic material from several individuals has been pooled together and the allele frequencies of each pool are determined in a single genotyping.
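A minimal generic Metropolis sampler in Python, to illustrate the MCMC machinery referred to above; it is not the pedigree/gene-flow sampler of the thesis, and the target density here is just a toy example.

```python
import numpy as np

def metropolis(log_posterior, x0, n_steps=10_000, step=0.5, seed=0):
    """Generic random-walk Metropolis sampler; a minimal illustration of MCMC,
    not the pedigree/gene-flow sampler developed in the thesis."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    logp = log_posterior(x)
    samples = np.empty((n_steps, x.size))
    for i in range(n_steps):
        proposal = x + step * rng.standard_normal(x.size)
        logp_prop = log_posterior(proposal)
        if np.log(rng.uniform()) < logp_prop - logp:   # Metropolis accept/reject
            x, logp = proposal, logp_prop
        samples[i] = x
    return samples

# Toy target: a 2D standard normal "posterior".
draws = metropolis(lambda x: -0.5 * np.sum(x ** 2), x0=[0.0, 0.0])
print(draws.mean(axis=0), draws.std(axis=0))
```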

Relevance: 20.00%

Abstract:

Planar curves arise naturally as interfaces between two regions of the plane. An important part of statistical physics is the study of lattice models. This thesis is about the interfaces of 2D lattice models. The scaling limit is an infinite-system limit, taken by letting the lattice mesh decrease to zero. At criticality, the scaling limit of an interface is one of the SLE curves (Schramm-Loewner evolution), introduced by Oded Schramm. This family of random curves is parametrized by a real variable, which determines the universality class of the model. The first and second papers of this thesis study properties of SLEs. They contain two different methods to study the whole SLE curve, which is, in fact, the most interesting object from the statistical physics point of view. These methods are applied to study two symmetries of SLE: reversibility and duality. The first paper uses an algebraic method and a representation of the Virasoro algebra to find martingales common to different processes and, in that way, to confirm the symmetries for polynomial expected values of natural SLE data. In the second paper, a recursion is obtained for the same kind of expected values. The recursion is based on the stationarity of the law of the whole SLE curve under an SLE-induced flow. The third paper deals with one of the most central questions of the field and provides a framework of estimates for describing 2D scaling limits by SLE curves. In particular, it is shown that a weak estimate on the probability of an annulus crossing implies that a random curve arising from a statistical physics model will have scaling limits and that those will be well described by Loewner evolutions with random driving forces.
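For reference, the chordal Loewner evolution that such scaling limits are described by, together with the Brownian driving force defining SLE_kappa; these are the standard definitions, not results of the papers:

```latex
% Chordal Loewner equation: the curve is encoded by conformal maps g_t, and
% SLE_\kappa corresponds to a Brownian driving force W_t = \sqrt{\kappa} B_t.
\partial_t g_t(z) = \frac{2}{g_t(z) - W_t}, \qquad g_0(z) = z,
\qquad W_t = \sqrt{\kappa}\, B_t .
```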

Relevance: 20.00%

Abstract:

What can the statistical structure of natural images teach us about the human brain? Even though the visual cortex is one of the most studied parts of the brain, surprisingly little is known about how exactly images are processed to leave us with a coherent percept of the world around us, so we can recognize a friend or drive on a crowded street without any effort. By constructing probabilistic models of natural images, the goal of this thesis is to understand the structure of the stimulus that is the raison d'être for the visual system. Following the hypothesis that the optimal processing has to be matched to the structure of that stimulus, we attempt to derive computational principles, features that the visual system should compute, and properties that cells in the visual system should have. Starting from machine learning techniques such as principal component analysis and independent component analysis we construct a variety of statistical models to discover structure in natural images that can be linked to receptive field properties of neurons in primary visual cortex such as simple and complex cells. We show that by representing images with phase invariant, complex cell-like units, a better statistical description of the visual environment is obtained than with linear simple cell units, and that complex cell pooling can be learned by estimating both layers of a two-layer model of natural images. We investigate how a simplified model of the processing in the retina, where adaptation and contrast normalization take place, is connected to the natural stimulus statistics. Analyzing the effect that retinal gain control has on later cortical processing, we propose a novel method to perform gain control in a data-driven way. Finally we show how models like those presented here can be extended to capture whole visual scenes rather than just small image patches. By using a Markov random field approach we can model images of arbitrary size, while still being able to estimate the model parameters from the data.
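A minimal sketch of the ICA step described above, using scikit-learn's FastICA; random data stands in for natural-image patches here, and only with real patches would the learned filters become localized, Gabor-like (simple-cell-like) features. The patch size and number of components are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
patch_size = 16
# Placeholder data: in a real experiment these would be patches sampled from
# natural images and vectorized to length patch_size**2.
patches = rng.standard_normal((5000, patch_size * patch_size))
patches -= patches.mean(axis=0)                     # remove the mean component

ica = FastICA(n_components=64, max_iter=500, random_state=0)
activations = ica.fit_transform(patches)            # feature activations per patch
filters = ica.components_.reshape(64, patch_size, patch_size)
print(filters.shape)                                # 64 candidate linear features
```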

Relevance: 20.00%

Abstract:

Many species inhabit fragmented landscapes, resulting from either anthropogenic or natural processes. The ecological and evolutionary dynamics of spatially structured populations are affected by a complex interplay between endogenous and exogenous factors. The metapopulation approach, simplifying the landscape to a discrete set of patches of breeding habitat surrounded by unsuitable matrix, has become a widely applied paradigm for the study of species inhabiting highly fragmented landscapes. In this thesis, I focus on the construction of biologically realistic models and their parameterization with empirical data, with the general objective of understanding how the interactions between individuals and their spatially structured environment affect ecological and evolutionary processes in fragmented landscapes. I study two hierarchically structured model systems: the Glanville fritillary butterfly in the Åland Islands and a system of two interacting aphid species in the Tvärminne archipelago, both located in South-Western Finland. The interesting and challenging feature of both study systems is that the population dynamics occur over multiple spatial scales that are linked by various processes. My main emphasis is on the development of mathematical and statistical methodologies. For the Glanville fritillary case study, I first build a Bayesian framework for the estimation of death rates and capture probabilities from mark-recapture data, with the novelty of accounting for variation among individuals in capture probabilities and survival. I then characterize the dispersal phase of the butterflies by deriving a mathematical approximation of a diffusion-based movement model applied to a network of patches. I use the movement model as a building block to construct an individual-based evolutionary model for the Glanville fritillary butterfly metapopulation. I parameterize the evolutionary model using a pattern-oriented approach and use it to study how the landscape structure affects the evolution of dispersal. For the aphid case study, I develop a Bayesian model of hierarchical multi-scale metapopulation dynamics, where the observed extinction and colonization rates are decomposed into intrinsic rates operating specifically at each spatial scale. In summary, I show how analytical approaches, hierarchical Bayesian methods, and individual-based simulations can be used individually or in combination to tackle complex problems from many different viewpoints. In particular, hierarchical Bayesian methods provide a useful tool for decomposing ecological complexity into more tractable components.
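To make the "discrete patches surrounded by unsuitable matrix" abstraction concrete, a generic stochastic patch-occupancy simulation in Python; the extinction rate, colonization kernel, and parameter values are illustrative assumptions, and this is not the hierarchical Bayesian model developed in the thesis.

```python
import numpy as np

def simulate_spom(coords, areas, n_years=100, e=0.1, c=0.2, alpha=1.0, seed=0):
    """Minimal stochastic patch-occupancy model: occupied patches go extinct
    with probability e per year, and empty patches are colonized with a
    probability that increases with connectivity to occupied patches."""
    rng = np.random.default_rng(seed)
    n = len(coords)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kernel = np.exp(-alpha * dist) * areas[None, :]   # dispersal kernel x source area
    np.fill_diagonal(kernel, 0.0)
    occupied = rng.uniform(size=n) < 0.5              # random initial occupancy
    for _ in range(n_years):
        connectivity = kernel @ occupied.astype(float)
        p_col = 1.0 - np.exp(-c * connectivity)
        survive = occupied & (rng.uniform(size=n) > e)
        colonize = (~occupied) & (rng.uniform(size=n) < p_col)
        occupied = survive | colonize
    return occupied

coords = np.random.default_rng(1).uniform(0.0, 10.0, size=(30, 2))
print("final fraction occupied:", simulate_spom(coords, np.ones(30)).mean())
```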

Relevance: 20.00%

Abstract:

Long-term monitoring data collected from wild smolts of Atlantic salmon (Salmo salar) in the Simojoki river, northern Finland, were used to study the relationships between smolt size and age, smolt and postsmolt migration, environmental conditions, and postsmolt survival. The onset of the smolt run was significantly dependent on the rising water temperature and decreasing discharge of the river in the spring. The mean length of smolts migrating early in the season was commonly higher, and the mean age always older, than among smolts migrating later. Many of the smolts migrating early in the season, and almost all smolts migrating later, had started their new growth in spring in the river before their sea entry. Among postsmolts, the time required for emigration from the estuary was dependent on the sea surface temperature (SST) off the river, being significantly shorter in years with warm sea temperatures than in years with cold ones. After leaving the estuary, the postsmolts migrated southwards along the eastern coast of the northern Gulf of Bothnia, the geographical distribution of the tag recoveries coinciding with the warm thermal zone in spring in the coastal area. After arriving in the southern Gulf of Bothnia in late summer, the postsmolts mostly migrated near the western coast, reaching the Baltic Main Basin in late autumn. Until the early 1990s there was only a weak positive association between smolt length and postsmolt survival. However, following a subsequent decrease in the mean smolt size, a significant positive dependence was observed between smolt size and the reported recapture rate of tagged salmon. The differences in recapture rates between smolts tagged during the first and second halves of the annual migration season were insignificant, indicating that the seasonal variation in smolt size and age seems to be too small to affect survival. Among the climatic factors examined, the summer SST in the Gulf of Bothnia was most clearly related to the survival of the wild postsmolts. Postsmolt survival appeared to be highest in years when the SST in June in the Bothnian Bay varied between 9 and 12 °C. In addition, the survival of wild postsmolts showed a significant positive dependence on the SST in July in the Bothnian Sea, but not on the abundance of the prey fish (0+ herring, Clupea harengus, and sprat, Sprattus sprattus) in the Bothnian Sea and the Baltic Main Basin. The results suggest that if the incidence of extreme weather conditions were to increase due to climatic changes, the postsmolt survival of wild salmon populations would probably be reduced. For improving the performance of hatchery-reared smolts, it could be useful to examine opportunities to produce smolts that are more similar in their traits and abilities to the wild smolts described in this thesis.