15 results for large-scale structures, filaments, clusters, radio galaxy, diffuse emission

in Helda - Digital Repository of the University of Helsinki


Relevance:

100.00%

Publisher:

Abstract:

Light scattering, or the scattering and absorption of electromagnetic waves, is an important tool in all remote-sensing observations. In astronomy, the light scattered or absorbed by a distant object can be the only source of information. In Solar-system studies, light-scattering methods are employed when interpreting observations of atmosphereless bodies such as asteroids, of planetary atmospheres, and of cometary or interplanetary dust. Our Earth is constantly monitored from artificial satellites at different wavelengths. In remote sensing of the Earth, light-scattering methods are not the only source of information: there is always the possibility of making in situ measurements. Satellite-based remote sensing is, however, superior in speed and coverage, provided that the scattered signal can be reliably interpreted. The optical properties of many industrial products play a key role in their quality. Especially for products such as paint and paper, the ability to obscure the background and to reflect light is of utmost importance. High-grade papers are evaluated based on their brightness, opacity, color, and gloss. In product development, there is a need for computer-based simulation methods that could predict the optical properties and could therefore be used to optimize quality while reducing material costs. With paper, for instance, pilot experiments on an actual paper machine can be very time- and resource-consuming. The light-scattering methods presented in this thesis rigorously solve the interaction of light with material that has wavelength-scale structures. These methods are computationally demanding, so their speed and accuracy play a key role. Different implementations of the discrete-dipole approximation are compared in the thesis, and the results provide practical guidelines for choosing a suitable code. In addition, a novel method is presented for the numerical computation of the orientation-averaged light-scattering properties of a particle, and the method is compared against existing techniques. Simulation of light scattering for various targets, and the possible problems arising from the finite size of the model target, are discussed in the thesis. Scattering by single particles and small clusters is considered, as well as scattering in particulate media and in continuous media with porosity or surface roughness. Various techniques for modeling the scattering media are presented, and the results are applied to optimizing the structure of paper. The same methods can, however, be applied in light-scattering studies of Solar-system regoliths or cometary dust, or in any remote-sensing problem involving light scattering in random media with wavelength-scale structures.
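As a concrete illustration of the kind of computation the thesis deals with, below is a minimal sketch of the discrete-dipole approximation for a small homogeneous sphere, written in Python/NumPy. The wavelength, refractive index, and discretisation are illustrative placeholders, and the dense direct solver stands in for the iterative, FFT-accelerated solvers used in production DDA codes; this is not code from the thesis.

```python
# Minimal discrete-dipole approximation (DDA) sketch: a small dielectric
# sphere is discretised into point dipoles, the coupled dipole equations
# are solved directly, and the extinction cross section is evaluated.
# All parameters are illustrative placeholders.
import numpy as np

wavelength = 0.5            # arbitrary units
k = 2 * np.pi / wavelength  # wavenumber
m = 1.5 + 0.01j             # complex refractive index (illustrative)
d = 0.05                    # lattice spacing, must be << wavelength
radius = 0.2                # sphere radius

# Dipole positions: cubic lattice points inside the sphere.
g = np.arange(-radius, radius + d / 2, d)
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
pos = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
pos = pos[np.linalg.norm(pos, axis=1) <= radius]
N = len(pos)

# Clausius-Mossotti polarizability per dipole.
alpha = 3 * d**3 / (4 * np.pi) * (m**2 - 1) / (m**2 + 2)

# Incident plane wave: x-polarised, propagating along z.
E_inc = np.zeros((N, 3), dtype=complex)
E_inc[:, 0] = np.exp(1j * k * pos[:, 2])

# Assemble the 3N x 3N system A P = E_inc; off-diagonal blocks couple
# dipole pairs through the free-space Green's dyadic.
A = np.zeros((3 * N, 3 * N), dtype=complex)
I3 = np.eye(3)
for j in range(N):
    A[3*j:3*j+3, 3*j:3*j+3] = I3 / alpha
    for l in range(N):
        if l == j:
            continue
        r_vec = pos[j] - pos[l]
        r = np.linalg.norm(r_vec)
        rr = np.outer(r_vec, r_vec) / r**2
        pref = np.exp(1j * k * r) / r
        A[3*j:3*j+3, 3*l:3*l+3] = pref * (k**2 * (rr - I3)
                                          + (1j * k * r - 1) / r**2 * (3 * rr - I3))

# Solve for the dipole polarizations and get the extinction cross section.
P = np.linalg.solve(A, E_inc.ravel()).reshape(N, 3)
C_ext = 4 * np.pi * k * np.sum(np.imag(np.sum(np.conj(E_inc) * P, axis=1)))
print(f"{N} dipoles, extinction cross section C_ext = {C_ext:.4e}")
```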

Relevance:

100.00%

Publisher:

Abstract:

In an earlier study, we reported on the excitation of large-scale vortices in Cartesian hydrodynamical convection models subject to sufficiently rapid rotation. In that study, the conditions for the onset of the instability were investigated in terms of the Reynolds (Re) and Coriolis (Co) numbers in models located at the stellar north pole. In this study, we extend our investigation to varying domain sizes, increasing stratification, and boxes placed at different latitudes. The effect of increasing the box size is to increase the sizes of the generated structures, so that the principal vortex always fills roughly half of the computational domain. The instability becomes stronger in the sense that the temperature anomaly and the change in the radial velocity are enhanced. The model with the smallest box size is found to be stable against the instability, suggesting that a sufficient scale separation between the convective eddies and the scale of the domain is required for the instability to work. The instability can be seen up to a colatitude of 30 degrees, beyond which the flow becomes dominated by other types of mean flows. The instability is also seen in a model with larger stratification. Unlike in the weakly stratified cases, the temperature anomaly caused by the vortex structures is then seen to depend on depth.
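The onset conditions above are quoted in terms of nondimensional numbers that the abstract does not define; a common convention in this type of convection model (stated here as an assumption) is

```latex
\mathrm{Re} = \frac{u_{\rm rms}}{\nu\,k_{\rm f}}, \qquad
\mathrm{Co} = \frac{2\,\Omega}{u_{\rm rms}\,k_{\rm f}},
```

where u_rms is the rms convective velocity, nu the kinematic viscosity, Omega the rotation rate, and k_f the wavenumber of the energy-carrying convective eddies.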

Relevance:

100.00%

Publisher:

Abstract:

In recent years, concern has arisen over the effects of increasing carbon dioxide (CO2) in the Earth's atmosphere due to the burning of fossil fuels. One way to mitigate the increase in atmospheric CO2 concentration and climate change is carbon sequestration into forest vegetation through photosynthesis. Comparable regional-scale estimates for the carbon balance of forests are therefore needed for scientific and political purposes. The aim of the present dissertation was to improve methods for quantifying and verifying inventory-based carbon pool estimates of boreal forests on mineral soils. Ongoing forest inventories provide data based on statistically sound sampling for estimating the level of carbon stocks and stock changes, but improved modelling tools and comparisons between methods are still needed. In this dissertation, the entire inventory-based large-scale forest carbon stock assessment method is presented together with separate methods for enhancing and comparing it. The enhancement methods presented here include ways to quantify the biomass of understorey vegetation and to estimate the litter production of needles and branches. In addition, the optical remote sensing method illustrated in this dissertation can be used for comparison with independent data. The forest inventory-based large-scale carbon stock assessment method demonstrated here provided reliable carbon estimates when compared with independent data. Future work to improve the accuracy of this method could consist of reducing the uncertainties regarding belowground biomass and litter production as well as the soil compartment. The methods developed will serve the needs of UNFCCC reporting and reporting under the Kyoto Protocol. The method is principally intended for analysts or planners interested in quantifying carbon over extensive forest areas.
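As a toy illustration of the inventory-based bookkeeping idea (the actual biomass and litter models of the thesis are not reproduced here), the sketch below converts plot-level stem volumes to carbon stocks using placeholder expansion factors and the commonly assumed 50% carbon fraction, and aggregates them over a region.

```python
# Toy sketch of an inventory-based carbon stock calculation: stem volumes
# from inventory plots are expanded to whole-tree biomass per compartment
# and converted to carbon. The wood density, expansion factors and carbon
# fraction below are generic placeholder values, not those of the thesis.

WOOD_DENSITY = 0.4      # t dry mass per m3 of stem volume (placeholder)
CARBON_FRACTION = 0.5   # t C per t dry biomass (commonly assumed value)

# Biomass of each compartment as a fraction of stem biomass (placeholders).
EXPANSION = {"stem": 1.0, "branches": 0.25, "foliage": 0.08, "roots": 0.30}

def plot_carbon_stock(stem_volume_m3_per_ha: float) -> float:
    """Carbon stock (t C / ha) of one inventory plot from its stem volume."""
    stem_biomass = stem_volume_m3_per_ha * WOOD_DENSITY
    total_biomass = sum(stem_biomass * f for f in EXPANSION.values())
    return total_biomass * CARBON_FRACTION

def regional_carbon_stock(plots):
    """Area-weighted mean carbon stock (t C / ha) over inventory plots.

    `plots` is a list of (stem volume in m3/ha, represented area in ha)
    pairs, e.g. from systematic inventory sampling.
    """
    total_c = sum(plot_carbon_stock(v) * area for v, area in plots)
    total_area = sum(area for _, area in plots)
    return total_c / total_area

if __name__ == "__main__":
    sample_plots = [(120.0, 250.0), (85.0, 310.0), (40.0, 190.0)]
    print(f"Mean carbon stock: {regional_carbon_stock(sample_plots):.1f} t C/ha")
```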

Relevance:

100.00%

Publisher:

Abstract:

Climate affects ecological processes at different levels. Large-scale climate processes, together with the atmosphere and the oceans, regulate local weather over large areas (from continental to hemispheric scales). This doctoral thesis aims to explain how the large-scale climate has influenced certain ecological processes in the northern boreal forest zone. The processes selected were tree ring growth, the occurrence of forest fires, and tree mortality caused by the mountain pine beetle. Large-scale climate was found to have influenced the frequency, duration, and spatial extent of these processes over very wide areas through key weather variables. The processes studied had a strong connection to large-scale climate. This connection has, however, been highly dynamic and changed during the twentieth century, as climate change altered the internal relationships between large-scale and regional climate processes.

Relevance:

100.00%

Publisher:

Abstract:

Large-scale chromosome rearrangements such as copy number variants (CNVs) and inversions encompass a considerable proportion of the genetic variation between human individuals. In a number of cases, they have been closely linked with various inheritable diseases. Single-nucleotide polymorphisms (SNPs) constitute another large part of the genetic variation between individuals. They are also typically abundant, and measuring them is straightforward and cheap. This thesis presents computational means of using SNPs to detect the presence of inversions and deletions, a particular variety of CNVs. Technically, the inversion-detection algorithm detects the suppressed recombination rate between inverted and non-inverted haplotype populations, whereas the deletion-detection algorithm uses the EM algorithm to estimate the haplotype frequencies of a window with and without a deletion haplotype. As a contribution to population biology, a coalescent simulator for simulating inversion polymorphisms has been developed. Coalescent simulation is a backward-in-time method of modelling population ancestry. Technically, the simulator also models multiple crossovers by using the Counting model as the chiasma interference model. Finally, this thesis includes an experimental section. The aforementioned methods were tested on synthetic data to evaluate their power and specificity. They were also applied to the HapMap Phase II and Phase III data sets, yielding a number of candidate previously unknown inversions and deletions, and also correctly detecting known rearrangements of these kinds.
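The EM step mentioned for the deletion-detection algorithm follows the standard recipe for estimating haplotype frequencies from unphased genotypes. Below is a minimal two-SNP version of that recipe in Python; it is a generic illustration and does not include the deletion haplotype or the window handling of the actual method.

```python
# Minimal EM sketch for haplotype frequency estimation from unphased
# two-SNP genotype data (toy version; ordinary haplotypes only).
from collections import Counter
from itertools import product

def em_haplotype_frequencies(genotypes, n_iter=100):
    """genotypes: list of ((a1, a2), (b1, b2)) unordered allele pairs at two SNPs.
    Returns estimated frequencies of the four haplotypes (a, b)."""
    haplotypes = list(product((0, 1), repeat=2))
    freqs = {h: 0.25 for h in haplotypes}          # uniform start

    for _ in range(n_iter):
        expected = Counter()
        for (a1, a2), (b1, b2) in genotypes:
            # All phasings consistent with the unphased genotype.
            phasings = {((a1, b1), (a2, b2)), ((a1, b2), (a2, b1))}
            weights = {p: freqs[p[0]] * freqs[p[1]] for p in phasings}
            total = sum(weights.values()) or 1e-12
            for (h1, h2), w in weights.items():    # E-step: expected counts
                expected[h1] += w / total
                expected[h2] += w / total
        n_hap = sum(expected.values())
        freqs = {h: expected[h] / n_hap for h in haplotypes}  # M-step
    return freqs

if __name__ == "__main__":
    data = [((0, 1), (0, 1)), ((0, 0), (0, 1)), ((1, 1), (1, 1)), ((0, 1), (1, 1))]
    for hap, f in sorted(em_haplotype_frequencies(data).items()):
        print(hap, round(f, 3))
```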

Relevance:

100.00%

Publisher:

Abstract:

During the past ten years, large-scale transcript analysis using microarrays has become a powerful tool to identify and predict functions for new genes. It allows simultaneous monitoring of the expression of thousands of genes and has become a routinely used tool in laboratories worldwide. Microarray analysis will, together with other functional genomics tools, take us closer to understanding the functions of all genes in the genomes of living organisms. Flower development is a genetically regulated process which has mostly been studied in the traditional model species Arabidopsis thaliana, Antirrhinum majus and Petunia hybrida. The molecular mechanisms behind flower development in these species are partly applicable to other plant systems. However, not all biological phenomena can be approached with just a few model systems. In order to understand and apply the knowledge to ecologically and economically important plants, other species also need to be studied. Sequencing of 17 000 ESTs from nine different cDNA libraries of the ornamental plant Gerbera hybrida made it possible to construct a cDNA microarray with 9000 probes. The probes of the microarray represent all different ESTs in the database. Of the gerbera ESTs, 20% were unique to gerbera, while 373 were specific to the Asteraceae family of flowering plants. Gerbera has composite inflorescences with three types of flowers that differ from each other morphologically. The marginal ray flowers are large, often pigmented, and female, while the central disc flowers are smaller, more radially symmetrical, perfect flowers. The intermediate trans flowers are similar to ray flowers but smaller in size. This feature, together with the molecular tools applied to gerbera, makes gerbera a unique system in comparison to the common model plants, which have only a single kind of flower in their inflorescence. In the first part of this thesis, conditions for gerbera microarray analysis were optimised, including experimental design, sample preparation and hybridization, as well as data analysis and verification. Moreover, in the first study, flower- and flower organ-specific genes were identified. After the reliability and reproducibility of the method were confirmed, the microarrays were used to investigate transcriptional differences between ray and disc flowers. This study revealed novel information about the morphological development as well as the transcriptional regulation of the early stages of development in the various flower types of gerbera. The most interesting finding was the differential expression of MADS-box genes, suggesting the existence of flower type-specific regulatory complexes in the specification of the different types of flowers. The gerbera microarray was further used to profile changes in expression during petal development. Gerbera ray flower petals are large, which makes them an ideal model for studying organogenesis. Six different stages were compared and analysed in detail. Expression profiles of genes related to cell structure and growth implied that during stage 2 cells divide, a process marked by the expression of histones, cyclins and tubulins. Stage 4 was found to be a transition stage between cell division and expansion, and by stage 6 cells had stopped dividing and instead underwent expansion. Interestingly, at the last analysed stage, stage 9, when cells no longer grew, the highest number of upregulated genes was detected.
The gerbera microarray is a fully functioning tool for large-scale studies of flower development, and correlation with real-time RT-PCR results shows that it is also highly sensitive and reliable. The gene expression data presented here will be a source for gene expression mining and marker gene discovery in future studies performed in the Gerbera Laboratory. The publicly available data will also serve the plant research community worldwide.
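As a generic illustration of the kind of two-group comparison underlying the ray- versus disc-flower analysis, the sketch below computes per-probe log2 fold changes and Welch t-tests with Benjamini-Hochberg correction on synthetic data. The normalisation and statistics actually used in the thesis are not specified in the abstract, so all numbers and thresholds here are placeholders.

```python
# Generic sketch of a two-group differential expression test: per-probe
# log2 fold change, Welch t-test across replicate arrays, and
# Benjamini-Hochberg FDR control. Synthetic data, illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n_reps = 9000, 6                      # probes x replicate arrays
ray = rng.lognormal(mean=6.0, sigma=0.4, size=(n_genes, n_reps))
disc = rng.lognormal(mean=6.0, sigma=0.4, size=(n_genes, n_reps))
disc[:50] *= 6.0                               # spike in 50 "regulated" probes

log_ray, log_disc = np.log2(ray), np.log2(disc)
log2_fc = log_disc.mean(axis=1) - log_ray.mean(axis=1)
t_stat, p_val = stats.ttest_ind(log_disc, log_ray, axis=1, equal_var=False)

# Benjamini-Hochberg step-up procedure at a 5 % false discovery rate.
order = np.argsort(p_val)
thresh = 0.05 * np.arange(1, n_genes + 1) / n_genes
below = p_val[order] <= thresh
k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
significant = np.zeros(n_genes, dtype=bool)
significant[order[:k]] = True

print(f"{significant.sum()} probes called differentially expressed "
      f"(max |log2 FC| = {np.abs(log2_fc).max():.2f})")
```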

Relevance:

100.00%

Publisher:

Abstract:

The Capercaillie (Tetrao urogallus L.) is often used as a focal species for landscape ecological studies: the minimum size of its lekking area is 300 ha, and the annual home range of an individual may cover 30–80 km². In Finland, Capercaillie populations have decreased by approximately 40–85%, with the declines likely to have started in the 1940s. Although the declines have partly stabilized from the 1990s onwards, it is obvious that the negative population trend was at least partly caused by changes in human land use. The aim of this thesis was to study the connections between human land use and Capercaillie populations in Finland on several spatial and temporal scales. First, the effect of forest age structure on Capercaillie population trends was studied in 18 forestry board districts in Finland during 1965–1988. Second, the abundances of Capercaillie and Moose (Alces alces L.) were compared in terms of several land-use variables on a scale of 50 × 50 km grids and in five regions in Finland. Third, the effects of forest cover and fine-grain forest fragmentation on Capercaillie lekking area persistence were studied in three study locations in Finland, on 1000 and 3000 m spatial scales surrounding the leks. The analyses concerning lekking areas were performed with two definitions of forest: > 60 and > 152 m³/ha of timber volume. The results show that patterns and processes at large spatial scales strongly influence Capercaillie in Finland. In particular, in southwestern and eastern Finland, high forest cover and low human impact were found to be beneficial for this species. Forest cover (> 60 m³/ha of timber) surrounding the lekking sites positively affected lekking area persistence only at the larger landscape scale (3000 m radius). The effects of older forest classes were hard to assess due to the scarcity of older forests in several study areas. Young and middle-aged forest classes were common in the vicinity of areas with high Capercaillie abundances, especially in northern Finland. The increase in the amount of younger forest classes did not provide a good explanation for the Capercaillie population decline in 1965–1988. In addition, there was no significant connection between mature forests (> 152 m³/ha of timber) and lekking area persistence in Finland. It seems that in present-day Finnish landscapes, the area covered with old forest is either too scarce to efficiently explain the abundance of Capercaillie and the persistence of lekking areas, or the effect of forest age is only important on smaller spatial scales than the ones studied in this thesis. In conclusion, larger spatial scales should be considered in future Capercaillie management. According to the proposed multi-level planning, the first priority should be to secure large, regional-scale forest cover, and the second priority should be to maintain a fine-grained, heterogeneous structure within the separate forest patches. A management unit covering hundreds of hectares, or even tens or hundreds of square kilometers, should be used, which requires regional-level land-use planning and co-operation between forest owners.

Relevance:

100.00%

Publisher:

Abstract:

Earlier work has suggested that large-scale dynamos can reach and maintain equipartition field strengths on a dynamical time scale only if the magnetic helicity of the fluctuating field can be shed from the domain through open boundaries. Our aim is to test this scenario in convection-driven dynamos by comparing results for open and closed boundary conditions. Three-dimensional numerical simulations of turbulent compressible convection with shear and rotation are used to study the effects of boundary conditions on the excitation and saturation level of large-scale dynamos. Open (vertical-field) and closed (perfect-conductor) boundary conditions are used for the magnetic field. The contours of shear are vertical and cross the outer surface, and are thus ideally suited for driving a shear-induced magnetic helicity flux. We find that for a given shear and rotation rate, the growth rate of the magnetic field is larger if open boundary conditions are used. The growth rate first increases for small magnetic Reynolds number, Rm, but then levels off at an approximately constant value for intermediate values of Rm. For large enough Rm, a small-scale dynamo is excited, and the growth rate in this regime increases in proportion to Rm^(1/2). In the nonlinear regime, the saturation level of the energy of the mean magnetic field is independent of Rm when open boundaries are used. In the case of perfect-conductor boundaries, the saturation level first increases as a function of Rm, but then decreases in proportion to Rm^(-1) for Rm > 30, indicative of catastrophic quenching. These results suggest that the shear-induced magnetic helicity flux is efficient in alleviating catastrophic quenching when open boundaries are used. The horizontally averaged mean field is, however, still weakly decreasing as a function of Rm even for open boundaries.
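In formula form, and using the conventional definition of the magnetic Reynolds number (an assumption here, since the abstract does not define it), the reported behaviour of the kinematic growth rate and of the saturated mean-field energy is

```latex
\mathrm{Rm} = \frac{u_{\rm rms}}{\eta\,k_{\rm f}}, \qquad
\lambda \propto \mathrm{Rm}^{1/2} \quad (\text{small-scale dynamo regime}), \qquad
\overline{\boldsymbol{B}}^{\,2} \propto
\begin{cases}
\mathrm{Rm}^{0}, & \text{open (vertical-field) boundaries},\\[2pt]
\mathrm{Rm}^{-1}, & \text{closed (perfect-conductor) boundaries},\ \mathrm{Rm}\gtrsim 30,
\end{cases}
```

where u_rms is the rms velocity, eta the magnetic diffusivity, k_f the forcing-scale wavenumber, and lambda the growth rate.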

Relevance:

100.00%

Publisher:

Abstract:

The human resource (HR) function is under pressure both to change roles and to play a large variety of roles. Questions of change and development in the HR function become particularly interesting in the context of mergers and acquisitions when two corporations are integrated. The purpose of the thesis is to examine the roles played by the HR function in the context of large-scale mergers and thus to understand what happens to the HR function in such change environments, and to shed light on the underlying factors that influence changes in the HR function. To achieve this goal, the study seeks first to identify the roles played by the HR function before and after the merger, and second, to identify the factors that affect the roles played by the HR function. It adopts a qualitative case study approach including ten focal case organisations (mergers) and four matching cases (non-mergers). The sample consists of large corporations originating from either Finland or Sweden. HR directors and members of the top management teams within the case organisations were interviewed. The study suggests that changes occur within the HR function, and that the trend is for the HR function to become increasingly strategic. However, the HR function was found to play strategic roles only when the HR administration ran smoothly. The study also suggests that the HR function has become more versatile. An HR function that was perceived to be mainly administrative before the merger is likely after the merger to perform some strategically important activities in addition to the administrative ones. Significant changes in the roles played by the HR function were observed in some of the case corporations. This finding suggests that the merger integration process is a window of opportunity for the HR function. HR functions that take a proactive and leading role during the integration process might expand the number of roles played and move from being an administrator before the merger to also being a business partner after integration. The majority of the HR functions studied remained mainly reactive during the organisational change process and although the evidence showed that they moved towards strategic tasks, the intra-functional changes remained comparatively small in these organisations. The study presents a new model that illustrates the impact of the relationship between the top management team and the HR function on the role of the HR function. The expectations held by the top management team for the HR function and the performance of the HR function were found to interact. On a dimension reaching from tactical to strategic, HR performance is likely to correspond to the expectations held by top management.

Relevance:

100.00%

Publisher:

Abstract:

During the last decades, mean-field models, in which large-scale magnetic fields and differential rotation arise from the interaction of rotation and small-scale turbulence, have been enormously successful in reproducing many of the observed features of the Sun. In the meantime, new observational techniques, most prominently helioseismology, have yielded invaluable information about the interior of the Sun. This new information, however, imposes strict conditions on mean-field models. Moreover, most present mean-field models depend on knowledge of the small-scale turbulent effects that give rise to the large-scale phenomena. In many mean-field models these effects are prescribed in an ad hoc fashion due to the lack of this knowledge. With large enough computers it would be possible to solve the MHD equations numerically under stellar conditions. However, the problem is too large by several orders of magnitude for present-day and any foreseeable computers. In our view, a combination of mean-field modelling and local 3D calculations is a more fruitful approach. The large-scale structures are well described by global mean-field models, provided that the small-scale turbulent effects are adequately parameterized. The latter can be achieved by performing local calculations, which allow a much higher spatial resolution than can be achieved in direct global calculations. In the present dissertation, three aspects of mean-field theories and models of stars are studied. Firstly, the basic assumptions of different mean-field theories are tested with calculations of isotropic turbulence and of hydrodynamic, as well as magnetohydrodynamic, convection. Secondly, even if mean-field theory is unable to give the required transport coefficients from first principles, it is in some cases possible to compute these coefficients from 3D numerical models in a parameter range that can be considered to describe the main physical effects in an adequately realistic manner. In the present study, the Reynolds stresses and turbulent heat transport, responsible for the generation of differential rotation, were determined along the mixing-length relations describing convection in stellar structure models. Furthermore, the alpha-effect and magnetic pumping due to turbulent convection in the rapid-rotation regime were studied. The third part of the present study applies the local results in mean-field models; this task is begun by applying the results concerning the alpha-effect and turbulent pumping in mean-field models describing the solar dynamo.
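For reference, the alpha-effect, turbulent pumping, and turbulent diffusivity mentioned above enter the mean-field induction equation in its standard textbook form; the specific closures derived in the thesis are not reproduced here.

```latex
\frac{\partial \overline{\boldsymbol{B}}}{\partial t}
= \nabla \times \left( \overline{\boldsymbol{U}} \times \overline{\boldsymbol{B}}
+ \alpha\,\overline{\boldsymbol{B}}
+ \boldsymbol{\gamma} \times \overline{\boldsymbol{B}}
- (\eta + \eta_{\rm t})\,\nabla \times \overline{\boldsymbol{B}} \right),
```

where the overbars denote mean fields, alpha is the alpha-effect, gamma the turbulent pumping velocity, eta the molecular and eta_t the turbulent magnetic diffusivity.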

Relevance:

100.00%

Publisher:

Abstract:

Advances in analysis techniques have led to a rapid accumulation of biological data in databases. Such data are often in the form of sequences of observations; examples include DNA sequences and the amino acid sequences of proteins. The scale and quality of the data promise answers to various biologically relevant questions in more detail than has been possible before. For example, one may wish to identify areas in an amino acid sequence that are important for the function of the corresponding protein, or investigate how characteristics at the level of the DNA sequence affect the adaptation of a bacterial species to its environment. Many of the interesting questions are intimately associated with the understanding of the evolutionary relationships among the items under consideration. The aim of this work is to develop novel statistical models and computational techniques to meet the challenge of deriving meaning from the increasing amounts of data. Our main concern is modelling the evolutionary relationships based on the observed molecular data. We operate within a Bayesian statistical framework, which allows a probabilistic quantification of the uncertainties related to a particular solution. As the basis of our modelling approach we utilize a partition model, which describes the structure of the data by appropriately dividing the data items into clusters of related items. Generalizations and modifications of the partition model are developed and applied to various problems. Large-scale data sets also present a computational challenge. The models used to describe the data must be realistic enough to capture the essential features of the current modelling task but, at the same time, simple enough to make it possible to carry out the inference in practice. The partition model fulfils these two requirements. Problem-specific features can be taken into account by modifying the prior probability distributions of the model parameters. The computational efficiency stems from the ability to integrate out the parameters of the partition model analytically, which enables the use of efficient stochastic search algorithms.
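The analytical integration mentioned above is what a conjugate prior provides. As an illustration (the thesis's exact likelihood and priors are not given in the abstract), the sketch below scores a partition of aligned sequences with the closed-form Dirichlet-multinomial marginal likelihood, the kind of quantity a stochastic search over partitions would compare.

```python
# Minimal sketch of the analytical marginalisation behind partition models:
# with a symmetric Dirichlet prior on per-cluster, per-column category
# probabilities, the cluster parameters integrate out to a closed-form
# Dirichlet-multinomial marginal likelihood. The prior value and data
# layout are illustrative assumptions.
import numpy as np
from scipy.special import gammaln

ALPHABET = "ACGT"
PRIOR = 1.0   # symmetric Dirichlet hyperparameter (assumption)

def log_marginal_likelihood(sequences, partition):
    """Log marginal likelihood of aligned sequences under a partition.

    sequences: list of equal-length strings over ALPHABET
    partition: list of sets of sequence indices (the clusters)
    """
    L = len(sequences[0])
    K = len(ALPHABET)
    total = 0.0
    for cluster in partition:
        for col in range(L):
            counts = np.array([sum(sequences[i][col] == a for i in cluster)
                               for a in ALPHABET], dtype=float)
            n = counts.sum()
            # Dirichlet-multinomial marginal for this cluster and column.
            total += (gammaln(K * PRIOR) - gammaln(K * PRIOR + n)
                      + np.sum(gammaln(PRIOR + counts)) - K * gammaln(PRIOR))
    return total

if __name__ == "__main__":
    seqs = ["ACGTAC", "ACGTAA", "TTGTAC", "TTGCAC"]
    one_cluster = [{0, 1, 2, 3}]
    two_clusters = [{0, 1}, {2, 3}]
    print("one cluster :", round(log_marginal_likelihood(seqs, one_cluster), 2))
    print("two clusters:", round(log_marginal_likelihood(seqs, two_clusters), 2))
```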

Relevance:

100.00%

Publisher:

Abstract:

The ever-increasing demand for faster computers in various areas, ranging from entertainment electronics to computational science, is pushing the semiconductor industry towards its limits in decreasing the sizes of electronic devices based on conventional materials. According to the famous law by Gordon E. Moore, a co-founder of the world's largest semiconductor company Intel, transistor sizes should decrease to the atomic level during the next few decades to maintain the present rate of increase in computational power. As leakage currents become a problem for traditional silicon-based devices already at sizes in the nanometer scale, an approach other than further miniaturization is needed to meet the needs of future electronics. A relatively recently proposed possibility for further progress in electronics is to replace silicon with carbon, another element from the same group in the periodic table. Carbon is an especially interesting material for nanometer-sized devices because it naturally forms different nanostructures. Furthermore, some of these structures have unique properties. The most widely suggested allotrope of carbon for use in electronics is a tubular molecule with an atomic structure resembling that of graphite. These carbon nanotubes are popular both among scientists and in industry because of a long list of exciting properties. For example, carbon nanotubes are electronically unique and have an uncommonly high strength-to-mass ratio, which has resulted in a multitude of proposed applications in several fields. In fact, due to some remaining difficulties regarding the large-scale production of nanotube-based electronic devices, fields other than electronics have been faster to develop profitable nanotube applications. In this thesis, the possibility of using low-energy ion irradiation to ease the route towards nanotube applications is studied through atomistic simulations at different levels of theory. Specifically, molecular dynamics simulations with analytical interaction models are used to follow the irradiation process of nanotubes in order to introduce different impurity atoms into these structures and thereby gain control over their electronic character. Ion irradiation is shown to be a very efficient method for replacing carbon atoms with boron or nitrogen impurities in single-walled nanotubes. Furthermore, potassium irradiation of multi-walled and fullerene-filled nanotubes is demonstrated to result in small potassium clusters in the hollow parts of these structures. Molecular dynamics simulations are further used to give an example of using irradiation to improve contacts between a nanotube and a silicon substrate. Methods based on density-functional theory are used to gain insight into the defect structures inevitably created during the irradiation. Finally, a new simulation code utilizing the kinetic Monte Carlo method is introduced to follow the time evolution of irradiation-induced defects in carbon nanotubes on macroscopic time scales. Overall, the molecular dynamics simulations presented in this thesis show that ion irradiation is a promising method for tailoring nanotube properties in a controlled manner. The calculations made with density-functional-theory-based methods indicate that it is energetically favorable for even relatively large defects to transform so as to keep the atomic configuration as close to that of the pristine nanotube as possible.
The kinetic Monte Carlo studies reveal that elevated temperatures during the processing enhance the self-healing of nanotubes significantly, ensuring low defect concentrations after the treatment with energetic ions. Thereby, nanotubes can retain their desired properties even after the irradiation. Throughout the thesis, atomistic simulations combining different levels of theory are demonstrated to be an important tool for determining the optimal conditions for irradiation experiments, because the atomic-scale processes at short time scales are extremely difficult to study by any other means.
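For illustration, a rejection-free (Gillespie-type) kinetic Monte Carlo loop of the kind used to follow defect evolution over long time scales is sketched below. The event list, energy barriers, and attempt frequency are placeholder assumptions, not the parameterisation used in the thesis; the example merely shows how an elevated temperature speeds up defect removal within a fixed processing time.

```python
# Generic rejection-free kinetic Monte Carlo loop: events are drawn with
# probability proportional to their Arrhenius rates and the clock advances
# by an exponentially distributed waiting time. Event types and barriers
# are placeholders, not values from the thesis.
import math
import random

KB = 8.617e-5        # Boltzmann constant in eV/K
ATTEMPT = 1.0e13     # attempt frequency in 1/s (typical order of magnitude)

# (event name, energy barrier in eV, change in defect count); placeholders.
EVENTS = [("defect migration", 0.9, 0),
          ("recombination (self-healing)", 1.3, -2)]

def kmc(n_defects, temperature, t_end, seed=1):
    """Rejection-free KMC run; returns (elapsed time, defects remaining)."""
    random.seed(seed)
    t = 0.0
    while t < t_end and n_defects > 0:
        # Total rate of each event type scales with the current defect count.
        rates = [n_defects * ATTEMPT * math.exp(-eb / (KB * temperature))
                 for _, eb, _ in EVENTS]
        total = sum(rates)
        # Advance the clock by an exponentially distributed waiting time.
        t += -math.log(1.0 - random.random()) / total
        if t > t_end:
            break
        # Choose the event with probability proportional to its rate.
        r, acc = random.random() * total, 0.0
        for (_, _, dn), rate in zip(EVENTS, rates):
            acc += rate
            if r <= acc:
                n_defects += dn
                break
    return t, n_defects

if __name__ == "__main__":
    for T in (300.0, 900.0):
        t, left = kmc(n_defects=200, temperature=T, t_end=1.0)
        print(f"T = {T:4.0f} K: {left:3d} defects remain after {min(t, 1.0):.3g} s")
```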