35 results for Statistical Information on Recidivism (SIR)


Abstract:

Remote sensing provides methods to infer land cover information over large geographical areas at a variety of spatial and temporal resolutions. Land cover is input data for a range of environmental models, and information on land cover dynamics is required for monitoring the implications of global change. Such data are also essential in support of environmental management and policymaking. Boreal forests are a key component of the global climate and a major sink of carbon. The northern latitudes are expected to experience a disproportionate and rapid warming, which can have a major impact on vegetation at forest limits. This thesis examines the use of optical remote sensing for estimating aboveground biomass, leaf area index (LAI), tree cover, and tree height in the boreal forests and the tundra-taiga transition zone in Finland. Continuous fields of forest attributes are required, for example, to improve the mapping of forest extent. The thesis focuses on studying the feasibility of satellite data at multiple spatial resolutions, assessing the potential of multispectral, -angular, and -temporal information, and providing a regional evaluation of global land cover data. Preprocessed ASTER, MISR and MODIS products are the principal satellite data. The reference data consist of field measurements, forest inventory data and fine resolution land cover maps. Fine resolution studies demonstrate how statistical relationships between biomass and satellite data are relatively strong in single-species, low-biomass mountain birch forests in comparison to higher-biomass coniferous stands. The combination of forest stand data and fine resolution ASTER images provides a method for biomass estimation using medium resolution MODIS data. The multiangular data improve the accuracy of land cover mapping in the sparsely forested tundra-taiga transition zone, particularly in mires. Similarly, multitemporal data improve the accuracy of coarse resolution tree cover estimates in comparison to single-date data. Furthermore, the peak of the growing season is not necessarily the optimal time for land cover mapping in the northern boreal regions. The evaluated coarse resolution land cover data sets have considerable shortcomings in northernmost Finland and should be used with caution in similar regions. Quantitative reference data and upscaling methods for integrating multiresolution data are required for calibration of statistical models and evaluation of land cover data sets. The preprocessed image products have potential for wider use as they can considerably reduce the time and effort used for data processing.
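
As a minimal illustration of the empirical estimation chain described above (calibrating a statistical relationship on fine resolution data and applying it at coarser resolution), the sketch below fits a linear model between a spectral vegetation index and field-measured biomass. All values, and the choice of NDVI as the predictor, are hypothetical and do not reproduce the thesis' actual models.

    import numpy as np

    # Hypothetical field plots: red and near-infrared reflectance from fine
    # resolution imagery, and field-measured aboveground biomass (t/ha).
    red = np.array([0.04, 0.05, 0.06, 0.03, 0.07])
    nir = np.array([0.35, 0.30, 0.25, 0.40, 0.22])
    biomass = np.array([80.0, 60.0, 45.0, 95.0, 35.0])

    ndvi = (nir - red) / (nir + red)

    # Calibrate a simple linear model: biomass ~ a * NDVI + b.
    a, b = np.polyfit(ndvi, biomass, 1)

    # Apply the model to NDVI from coarser resolution pixels to produce a
    # continuous biomass field (the upscaling step).
    coarse_ndvi = np.array([0.55, 0.70, 0.62])
    print(a * coarse_ndvi + b)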

Abstract:

A wide range of models used in agriculture, ecology, carbon cycling, climate and other related studies require information on the amount of leaf material present in a given environment to correctly represent radiation, heat, momentum, water, and various gas exchanges with the overlying atmosphere or the underlying soil. Leaf area index (LAI) thus often features as a critical land surface variable in parameterisations of global and regional climate models, e.g., for radiation uptake, precipitation interception, energy conversion, gas exchange, and momentum, as these are all substantially determined by the vegetation surface. Optical wavelengths are the electromagnetic regions most commonly used in remote sensing for LAI estimation and for vegetation studies in general. The main purpose of this dissertation was to enhance the determination of LAI using close-range remote sensing (hemispherical photography), airborne remote sensing (high resolution colour and colour infrared imagery), and satellite remote sensing (high resolution SPOT 5 HRG imagery) optical observations. Commonly used light extinction models are applied at all levels of optical observation. For the sake of comparative analysis, LAI was further determined using statistical relationships between a spectral vegetation index (SVI) and ground-based LAI. The study areas of this dissertation are located in two regions: one in the Taita Hills, South-East Kenya, characterised by tropical cloud forest and exotic plantations, and the other in Gatineau Park, southern Quebec, Canada, dominated by temperate hardwood forest. The procedure for sampling the sky map of gap fraction and gap size from hemispherical photographs proved to be one of the most crucial steps in the accurate determination of LAI. LAI and clumping index estimates were significantly affected by the variation of the size of sky segments for given zenith angle ranges. On sloping ground, gap fraction and size distributions present strong upslope/downslope asymmetry of foliage elements, and thus the correction and the sensitivity analysis for both LAI and clumping index computations were demonstrated. Several SVIs can be used for LAI mapping using empirical regression analysis, provided that the sensitivities of the SVIs at varying ranges of LAI are large enough. Large-scale LAI inversion algorithms were demonstrated and proved to be an efficient alternative approach for LAI mapping. LAI can be estimated nonparametrically from the information contained solely in the remotely sensed dataset, given that the upper-end (saturated SVI) value is accurately determined. However, further study is still required to devise a methodology as well as instrumentation to retrieve on-ground green leaf area index. Only then can the large-scale LAI inversion algorithms presented in this work be precisely validated. Finally, based on a literature review and this dissertation, potential future research prospects and directions were recommended.
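
As a minimal illustration of the light extinction models mentioned above, the sketch below inverts the Beer-Lambert-type gap fraction relation P(theta) = exp(-G(theta) * L / cos(theta)) to obtain effective LAI from hemispherical-photograph gap fractions; dividing by a clumping index would give true LAI. The gap fractions, zenith rings, and the spherical leaf angle assumption (G = 0.5) are illustrative and not the dissertation's exact processing chain.

    import numpy as np

    def effective_lai(gap_fraction, zenith_deg, G=0.5):
        # Invert P(theta) = exp(-G * L / cos(theta)) for effective LAI.
        theta = np.radians(zenith_deg)
        return -np.cos(theta) * np.log(gap_fraction) / G

    # Hypothetical gap fractions measured in three zenith angle rings.
    gaps = np.array([0.45, 0.30, 0.20])
    angles = np.array([22.5, 37.5, 52.5])
    print(effective_lai(gaps, angles).mean())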

Abstract:

Metabolism is the cellular subsystem responsible for the generation of energy from nutrients and the production of building blocks for larger macromolecules. Computational and statistical modeling of metabolism is vital to many disciplines including bioengineering, the study of diseases, drug target identification, and understanding the evolution of metabolism. In this thesis, we propose efficient computational methods for metabolic modeling. The techniques presented are targeted particularly at the analysis of large metabolic models encompassing the whole metabolism of one or several organisms. We concentrate on three major themes of metabolic modeling: metabolic pathway analysis, metabolic reconstruction, and the study of the evolution of metabolism. In the first part of this thesis, we study metabolic pathway analysis. We propose a novel modeling framework called gapless modeling to study biochemically viable metabolic networks and pathways. In addition, we investigate the utilization of atom-level information on metabolism to improve the quality of pathway analyses. We describe efficient algorithms for discovering both gapless and atom-level metabolic pathways, and conduct experiments with large-scale metabolic networks. The presented gapless approach offers a compromise in terms of complexity and feasibility between the previous graph-theoretic and stoichiometric approaches to metabolic modeling. Gapless pathway analysis shows that microbial metabolic networks are not as robust to random damage as suggested by previous studies. Furthermore, the amino acid biosynthesis pathways of the fungal species Trichoderma reesei discovered from atom-level data are shown to closely correspond to those of Saccharomyces cerevisiae. In the second part, we propose computational methods for metabolic reconstruction in the gapless modeling framework. We study the task of reconstructing a metabolic network that does not suffer from connectivity problems. Such problems often limit the usability of reconstructed models, and typically require a significant amount of manual postprocessing. We formulate gapless metabolic reconstruction as an optimization problem and propose an efficient divide-and-conquer strategy to solve it for real-world instances. We also describe computational techniques for solving problems stemming from ambiguities in metabolite naming. These techniques have been implemented in a web-based software, ReMatch, intended for the reconstruction of models for 13C metabolic flux analysis. In the third part, we extend our scope from single to multiple metabolic networks and propose an algorithm for inferring gapless metabolic networks of ancestral species from phylogenetic data. Experimenting with 16 fungal species, we show that the method is able to generate results that are easily interpretable and that provide hypotheses about the evolution of metabolism.
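
To illustrate the kind of network connectivity reasoning that gap analysis builds on, the sketch below performs a simple forward reachability check on a toy reaction set: a metabolite counts as producible only if some chain of reactions can generate it from the given nutrients. The reaction set and seed metabolites are hypothetical, and this is not the thesis' gapless algorithm itself.

    # Reactions as (substrate set, product set); a toy, hypothetical network.
    reactions = [
        ({"glucose"}, {"g6p"}),
        ({"g6p"}, {"f6p"}),
        ({"f6p", "atp"}, {"fbp"}),
        ({"fbp"}, {"pyruvate"}),
    ]

    def producible(seeds, reactions):
        # Expand the set of reachable metabolites until a fixed point:
        # a reaction can fire once all of its substrates are available.
        reachable = set(seeds)
        changed = True
        while changed:
            changed = False
            for subs, prods in reactions:
                if subs <= reachable and not prods <= reachable:
                    reachable |= prods
                    changed = True
        return reachable

    print(producible({"glucose", "atp"}, reactions))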

Abstract:

Analyzing statistical dependencies is a fundamental problem in all empirical science. Dependencies help us understand causes and effects, create new scientific theories, and invent cures for problems. Nowadays, large amounts of data are available, but efficient computational tools for analyzing the data are missing. In this research, we develop efficient algorithms for a commonly occurring search problem: searching for the statistically most significant dependency rules in binary data. We consider dependency rules of the form X->A or X->not A, where X is a set of positive-valued attributes and A is a single attribute. Such rules describe which factors either increase or decrease the probability of the consequent A. Classical examples are genetic and environmental factors, which can either cause or prevent a disease. The emphasis in this research is that the discovered dependencies should be genuine, i.e., they should also hold in future data. This is an important distinction from the traditional association rules, which, in spite of their name and their similar appearance to dependency rules, do not necessarily represent statistical dependencies at all, or represent only spurious connections that occur by chance. Therefore, the principal objective is to search for rules using statistical significance measures. Another important objective is to search only for non-redundant rules, which express the real causes of dependence without any incidental extra factors. Such extra factors add no new information on the dependence, but can only blur it and make it less accurate in future data. The problem is computationally very demanding, because the number of all possible rules increases exponentially with the number of attributes. In addition, neither statistical dependency nor statistical significance is a monotonic property, which means that the traditional pruning techniques do not work. As a solution, we first derive the mathematical basis for pruning the search space with any well-behaving statistical significance measure. The mathematical theory is complemented by a new algorithmic invention, which enables an efficient search without any heuristic restrictions. The resulting algorithm can be used to search for both positive and negative dependencies with any commonly used statistical measures, such as Fisher's exact test, the chi-squared measure, mutual information, and z-scores. According to our experiments, the algorithm scales well, especially with Fisher's exact test. It can easily handle even the densest data sets with 10,000-20,000 attributes. Still, the results are globally optimal, which is a remarkable improvement over the existing solutions. In practice, this means that the user does not have to worry whether the dependencies hold in future data or whether the data still contains better, but undiscovered, dependencies.
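
As an illustration of how the significance of a single candidate rule X->A can be measured, the sketch below builds the 2x2 contingency table for one rule on toy binary data and evaluates it with Fisher's exact test via scipy. The data and the chosen attributes are hypothetical, and the search and pruning machinery of the thesis is not shown.

    import numpy as np
    from scipy.stats import fisher_exact

    # Toy binary data: rows are observations, columns are attributes.
    # Columns 0 and 1 form the condition X; column 2 is the consequent A.
    data = np.array([
        [1, 1, 1],
        [1, 1, 1],
        [1, 0, 0],
        [0, 1, 0],
        [1, 1, 1],
        [0, 0, 0],
    ])

    x = data[:, [0, 1]].all(axis=1)   # rows where X holds
    a = data[:, 2] == 1               # rows where A holds

    # 2x2 contingency table for the rule X -> A.
    table = [[np.sum(x & a),  np.sum(x & ~a)],
             [np.sum(~x & a), np.sum(~x & ~a)]]

    # One-sided Fisher's exact test: is A over-represented when X holds?
    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    print(p_value)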

Abstract:

In economics, the information that economic agents have and regard as relevant to their decision making is often assumed to be exogenous. It is assumed that the agents either possess the payoff-relevant information or can observe it without having to exert any effort to acquire it. In this thesis we relax the assumption of an ex-ante fixed information structure and study what happens to equilibrium behavior when the agents must also decide what information to acquire and when to acquire it. This thesis addresses this question in two essays on herding and two essays on auction theory. In the first two essays, which are joint work with Klaus Kultti, we study herding models where it is costly to acquire information on the actions that the preceding agents have taken. In our model the agents have to decide both the action that they take and, additionally, the information that they want to acquire by observing their predecessors. We characterize the equilibrium behavior when the decision to observe preceding agents' actions is endogenous and show how the equilibrium outcome may differ from the standard model, where all preceding agents' actions are assumed to be observable. In the latter part of this thesis we study two dynamic auctions: the English and the Dutch auction. We consider a situation where bidders are uninformed about their valuations for the object that is put up for sale, and they may acquire this information for a small cost at any point during the auction. We study the case of independent private valuations. In the third essay of the thesis we characterize the equilibrium behavior in an English auction when there are informed and uninformed bidders. We show that the informed bidder may jump bid and signal to the uninformed that he has a high valuation, thus deterring the uninformed from acquiring information and staying in the auction. The uninformed bidder optimally acquires information once the price has passed a particular threshold and the informed bidder has not signalled that his valuation is high. In addition, we provide an example of an information structure where the informed bidder initially waits and then makes multiple jumps. In the fourth essay of this thesis we study the Dutch auction. We consider two cases where all bidders are initially uninformed. In the first case the information acquisition cost is the same across all bidders; in the second, the cost of information acquisition is also independently distributed and private information to the bidders. We characterize a mixed-strategy equilibrium in the first case and a pure-strategy equilibrium in the second. In addition, we provide a conjecture about an equilibrium in an asymmetric situation where there is one informed and one uninformed bidder. We compare the revenues that the first-price auction and the Dutch auction generate, and we find that under some circumstances the Dutch auction outperforms the first-price sealed-bid auction. The usual first-price sealed-bid auction and the Dutch auction are strategically equivalent; however, this equivalence breaks down when information can be acquired during the auction.

Abstract:

Transposons are mobile elements of genetic material that are able to move in the genomes of their host organisms using a special form of recombination called transposition. Bacteriophage Mu was the first transposon for which a cell-free in vitro transposition reaction was developed. Subsequently, the reaction has been refined, and the minimal Mu in vitro reaction is useful in the generation of comprehensive libraries of mutant DNA molecules that can be used in a variety of applications. To date, the functional genetics applications of Mu in vitro technology have been applied either to plasmids or to genomic regions and entire viral genomes cloned into specific vectors. This study expands the use of Mu in vitro transposition in functional genetics and genomics by describing novel methods applicable to the targeted transgenesis of the mouse and the whole-genome analysis of bacteriophages. The methods described here are rapid, efficient, and easily applicable to a wide variety of organisms, demonstrating the potential of the Mu transposition technology in the functional analysis of genes and genomes. First, an easy-to-use, rapid strategy to generate constructs for the targeted mutagenesis of mouse genes was developed. To test the strategy, a gene encoding a neuronal K+/Cl- cotransporter was mutagenised. After a highly efficient transpositional mutagenesis, the mutagenised gene fragments were cloned into a vector backbone and transferred into bacterial cells. These constructs were screened with PCR using an effective 3D matrix system. In addition to traditional knock-out constructs, the method developed yields hypomorphic alleles that lead to reduced expression of the target gene in transgenic mice and that have since been used in a follow-up study. Moreover, a scheme is devised to rapidly produce conditional alleles from the constructs produced. Next, an efficient strategy for the whole-genome analysis of bacteriophages was developed, based on the transpositional mutagenesis of uncloned, infective virus genomes and their subsequent transfer into susceptible host cells. Mutant viruses able to produce viable progeny were collected and their transposon integration sites determined to map genomic regions nonessential to the viral life cycle. This method, applied here to three very different bacteriophages, PRD1, ΦYeO3-12, and PM2, does not require the target genome to be cloned and is directly applicable to all DNA and RNA viruses that have infective genomes. The method developed yielded valuable novel information on the three bacteriophages studied, and the whole-genome data can be complemented with concomitant studies on individual genes. Moreover, end-modified transposons constructed for this study can be used to manipulate genomes devoid of suitable restriction sites.

Abstract:

Cancer is a leading cause of death worldwide, and the total number of cancer cases continues to increase. Many cancers, for example sinonasal cancer and lung cancer, have clear external risk factors and so are potentially preventable. The occurrence of sinonasal cancer is strongly associated with wood dust exposure, and the main risk factor for lung cancer is tobacco smoking. Although the molecular mechanisms involved in lung carcinogenesis have been widely studied, very little is known about the molecular changes leading to sinonasal cancer. In this work, mutations in the tumour suppressor gene TP53 in cases of sinonasal cancer and lung cancer, and the associations of these mutations with exposure factors, were studied. In addition, another important mechanism in many cancers, inflammation, was explored by analyzing the expression of the inflammation-related enzyme COX-2 in sinonasal cancer. The results demonstrate that TP53 mutations are frequent in sinonasal cancer and lung cancer and that in both cancers they are associated with exposure. In sinonasal cancer, the occurrence of TP53 mutations increased significantly in relation to long duration and high level of exposure to wood dust. Smoking was not associated with the overall occurrence of TP53 mutation in sinonasal cancer, but was associated with multiple TP53 mutations. Furthermore, inflammation appears to play a part in sinonasal carcinogenesis, as indicated by our results showing that the expression of COX-2 was associated with adenocarcinoma-type tumours, wood dust exposure, and non-smoking. In lung cancer, we detected statistically significant associations between TP53 mutations and duration of smoking, gender, and histology. We also found that patients with a tumour carrying a G-to-T transversion, a mutation commonly found in association with tobacco smoking, had a high level of smoking-related bulky DNA adducts in their non-tumorous lung tissue. Altogether, the information on molecular changes in exposure-induced cancers adds to the observations from epidemiological studies and helps to understand the role and impact of different etiological factors, which in turn can be beneficial for risk assessment and prevention.

Abstract:

This clinical study focused on the effects of childhood specific language impairment (SLI) on daily functioning in later life. SLI is a neurobiological disorder with a genetic predisposition that manifests as poor language production or comprehension, or both, in a child with age-level non-verbal intelligence and no other known cause for deficient language development. A prevalence rate of around 7% puts it among the most prevalent developmental disorders of childhood. Negative long-term effects, such as problems in learning and behavior, are frequent. In follow-up studies, the focus has seldom been on self-perception of daily functioning and participation, which are considered important in the International Classification of Functioning, Disability, and Health (ICF). To investigate the self-perceived aspects of everyday functioning in individuals with childhood receptive SLI compared with age- and gender-matched control populations, the 15D, 16D, and 17D health-related quality of life (HRQoL) questionnaires were applied. These generic questionnaires include 15, 16, and 17 dimensions, respectively, and give both a single index score and a profile with values on each dimension. Information on different life domains (rehabilitation, education, employment, etc.) from each age group was collected with separate questionnaires. The study groups comprised adults, adolescents (12-16 years), and pre-adolescents (8-11 years) who had received a diagnosis of receptive SLI and had been examined, usually before school age, at the Department of Phoniatrics of Helsinki University Central Hospital, where children with language deficits caused by various etiologies are examined and treated by a multidisciplinary team. The adult respondents included 33 subjects with a mean age of 34 years. Measured with the 15D, the subjects perceived their HRQoL to be nearly as good as that of their controls, but on the dimensions of speech, usual activities, mental functioning, and distress they were significantly worse off. They lived with their parents (19%) or were pensioned (26%) significantly more often than the adult Finnish population on average. Adults with self-perceived problems in finding words and in remembering instructions, manifestations of persistent language impairment, showed inferior everyday functioning compared to the rest of the study group. Of the adolescents and pre-adolescents, 48 and 51, respectively, responded. The majority in both groups had received special education or extra educational support at school. They had all attended speech therapy at some point; at the time of the study only one adolescent, but every third pre-adolescent, still received speech therapy. The 16D score of the adolescents and the 17D score of the pre-adolescents did not differ from those of their controls. The 16D profiles differed on some dimensions; subjects were significantly worse off on the dimension of mental functioning, but better off on the dimension of vitality than controls. Of the 17D dimensions, the study group was significantly worse off on speech, whereas the control group reported significantly more problems in sleeping. Of the childhood performance measures investigated, low verbal intelligence quotient (VIQ), which is often considered to reflect receptive language impairment, was in adult subjects significantly associated with some of the self-perceived problems, such as problems in usual activities and mental functioning. The 15D, 16D, and 17D questionnaires served well in measuring self-perceived HRQoL.
Such standardized measures with population values are especially important for conformity with the ICF guidelines. In the future, these questionnaires could perhaps be used on a more individual level in the follow-up of children in clinics, and even in special schools and classes, to detect those children at greatest risk of negative long-term effects and perhaps diminished well-being regarding daily functioning and participation.

Abstract:

The metabolic syndrome and type 1 diabetes are associated with brain alterations such as cognitive decline, brain infarctions, atrophy, and white matter lesions. Despite the importance of these alterations, their pathomechanism is still poorly understood. This study was conducted to investigate brain glucose and metabolites in healthy individuals with an increased cardiovascular risk and in patients with type 1 diabetes in order to discover more information on the nature of the known brain alterations. We studied 43 men aged 20 to 45 years. Study I compared two groups of non-diabetic men, one with an accumulation of cardiovascular risk factors and another without. Studies II to IV compared men with type 1 diabetes (duration of diabetes 6.7 ± 5.2 years, no microvascular complications) with non-diabetic men. Brain glucose, N-acetylaspartate (NAA), total creatine (tCr), choline, and myo-inositol (mI) were quantified with proton magnetic resonance spectroscopy in three cerebral regions (frontal cortex, frontal white matter, and thalamus) and in the cerebellar white matter. Data collection was performed for all participants during fasting glycemia and, in a subgroup (Studies III and IV), also during a hyperglycemic clamp that increased plasma glucose concentration by 12 mmol/l. In non-diabetic men, the brain glucose concentration correlated linearly with plasma glucose concentration. The cardiovascular risk group (Study I) had a 13% higher plasma glucose concentration than the control group, but no difference in thalamic glucose content. The risk group thus had lower thalamic glucose content than expected. They also had 17% higher tCr (a marker of oxidative metabolism). In the control group, tCr correlated with thalamic glucose content, but in the risk group, tCr correlated instead with fasting plasma glucose and 2-h plasma glucose concentration in the oral glucose tolerance test. Risk factors of the metabolic syndrome, most importantly insulin resistance, may thus influence brain metabolism. During fasting glycemia (Study II), regional variation in the cerebral glucose levels appeared in the non-diabetic subjects but not in those with diabetes. In diabetic patients, excess glucose had accumulated predominantly in the white matter, where the metabolite alterations were also the most pronounced. Compared to the control values, the white matter NAA (a marker of neuronal metabolism) was 6% lower and mI (a glial cell marker) 20% higher. Hyperglycemia is therefore a potent risk factor for diabetic brain disease, and the metabolic brain alterations may appear even before any peripheral microvascular complications are detectable. During acute hyperglycemia (Study III), the increase in cerebral glucose content in the patients with type 1 diabetes was, depending on brain region, between 1.1 and 2.0 mmol/l. An everyday hyperglycemic episode in a diabetic patient may therefore as much as double the brain glucose concentration. While chronic hyperglycemia had led to accumulation of glucose in the white matter, acute hyperglycemia burdened predominantly the gray matter. Acute hyperglycemia also revealed that chronic fluctuation in blood glucose may be associated with alterations in glucose uptake or in metabolism in the thalamus. The cerebellar white matter appeared very different from the cerebral white matter (Study IV). In the non-diabetic men it contained twice as much glucose as the cerebrum. Diabetes had altered neither its glucose content nor the brain metabolites.
The cerebellum therefore seems more resistant to the effects of hyperglycemia than the cerebrum.

Abstract:

Interstellar clouds are not featureless, but show quite complex internal structures of filaments and clumps when observed with high enough resolution. These structures have been generated by 1) turbulent motions driven mainly by supernovae, 2) magnetic fields acting on the ions and, through neutral-ion collisions, on the neutral gas as well, and 3) self-gravity pulling a dense clump together to form a new star. The study of cloud structure gives us information on the relative importance of each of these mechanisms, and helps us to gain a better understanding of the details of the star formation process. Interstellar dust is often used as a tracer for the interstellar gas, which forms the bulk of the interstellar matter. Some of the methods that are used to derive the column density are summarized in this thesis. A new method, which uses scattered light to map the column density in large fields with high spatial resolution, is introduced. This thesis also examines grain alignment with respect to the magnetic field. The aligned grains give rise to the polarization of starlight and of dust emission, thus revealing the magnetic field. The alignment mechanisms have been debated for the last half century. The strongest candidate at present is the radiative torques mechanism. In the first four papers included in this thesis, the scattered light method of column density estimation is formulated, tested in simulations, and finally used to obtain a column density map from observations. They demonstrate that the scattered light method is a very useful and reliable tool for column density estimation, and is able to provide higher resolution than the near-infrared color excess method. These two methods are complementary. The derived column density maps are also used to gain information on the dust emissivity within the observed cloud. The two final papers present simulations of polarized thermal dust emission assuming that the alignment happens by the radiative torques mechanism. We show that radiative torques can explain the observed decline of the polarization degree towards dense cores. Furthermore, the results indicate that the dense cores themselves might not contribute significantly to the polarized signal, and hence one needs to be careful when interpreting the observations and deriving the magnetic field.

Abstract:

Several excited states of Ds and Bs mesons have been discovered in the last six years: BaBar, CLEO, and Belle discovered the very narrow states D*_s0(2317)^± and D_s1(2460)^± in 2003, and the CDF and DØ collaborations reported the observation of two narrow Bs resonances, B_s1(5830)^0 and B*_s2(5840)^0, in 2007. To keep up with experiment, meson excited states should be studied from the theoretical side as well. The theory that describes the interaction between quarks and gluons is quantum chromodynamics (QCD). In this thesis the properties of the meson states are studied using the discretized version of the theory, lattice QCD. This allows us to perform QCD calculations from first principles, and to "measure" not just energies but also the radial distributions of the states on the lattice. This gives valuable theoretical information on the excited states, as we can extract the energy spectrum of a static-light meson up to D-wave states (states with orbital angular momentum L=2). We are thus able to predict where some of the excited meson states should lie. We also pay special attention to the order of the states, to detect possible inverted spin multiplets in the meson spectrum, as predicted by H. Schnitzer in 1978. This inversion is connected to the confining potential of the strong interaction. The lattice simulations can also help us understand the strong interaction better, as the lattice data can be treated as "experimental" data and used in testing potential models. In this thesis an attempt is made to explain the energies and radial distributions in terms of a potential model based on a one-body Dirac equation. The aim is to get more information about the nature of the confining potential, as well as to test how well the one-gluon exchange potential explains the short-range part of the interaction.

Abstract:

Atmospheric aerosol particles have significant climatic effects. Secondary new particle formation is a globally important source of these particles. Currently, the mechanisms of particle formation and the vapours participating in this process are, however, not fully understood. The recently developed Neutral cluster and Air Ion Spectrometer (NAIS) was widely used in field studies of atmospheric particle formation. The NAIS was calibrated and found to be in adequate agreement with the reference instruments. It was concluded that the NAIS can be reliably used to measure ions and particles near the sizes where atmospheric particle formation begins. The main focus of this thesis was to study new particle formation and the participation of ions in this process. To attain this objective, particle and ion formation and growth rates were studied in various environments: at several field sites in Europe, at previously rarely studied sites in Antarctica and Siberia, and also in an indoor environment. New particle formation was observed at all of the sites studied, and the observations were used as indicators of the particle formation mechanisms. Particle size-dependent growth rates and nucleation-mode hygroscopic growth factors were examined to obtain information on particle growth. It was found that atmospheric ions participate in the initial steps of new particle formation, although their contribution was minor in the boundary layer. The highest atmospheric particle formation rates were observed at the most polluted sites, where the role of ions was the least pronounced. Furthermore, the increase of particle growth rate with size suggested that enhancement of the growth by ions was negligible. Participation of organic vapours in the particle growth was supported by laboratory and field observations. It was also shown that secondary new particle formation can be a significant source of indoor air particles. These results, extending over a wide variety of environments, give support to previous observations and increase understanding of new particle formation on a global scale.

Abstract:

Atmospheric aerosol particles have a significant impact on air quality, human health, and global climate. The climatic effects of secondary aerosol are currently among the largest uncertainties limiting the scientific understanding of future and past climate changes. To better estimate the climatic importance of secondary aerosol particles, detailed information on atmospheric particle formation mechanisms and the vapours forming the aerosol is required. In this thesis we studied these issues by applying novel instrumentation in a boreal forest to obtain direct information on the very first steps of atmospheric nucleation and particle growth. Additionally, we used detailed laboratory experiments and process modelling to determine condensational growth properties, such as saturation vapour pressures, of dicarboxylic acids, which are organic acids often found in atmospheric samples. Based on our studies, we came to four main conclusions: 1) In the boreal forest region, both sulphurous compounds and organics are needed for secondary particle formation, the former contributing mainly to particle formation and the latter to growth; 2) A persistent pool of molecular clusters, both neutral and charged, is present and participates in atmospheric nucleation processes in boreal forests; 3) Neutral particle formation seems to dominate over ion-mediated mechanisms, at least in the boreal forest boundary layer; 4) The subcooled liquid-phase saturation vapour pressures of C3-C9 dicarboxylic acids are of the order of 1e-5 to 1e-3 Pa at atmospheric temperatures, indicating that a mixed pre-existing particulate phase is required for their condensation in atmospheric conditions. The work presented in this thesis gives tools to better quantify the aerosol source provided by secondary aerosol formation. The results are particularly useful when estimating, for instance, anthropogenic versus biogenic influences and the fractions of secondary aerosol formation explained by neutral or ion-mediated nucleation mechanisms, at least in environments where the average particle formation rates are of the order of some tens of particles per cubic centimetre or lower. However, as the factors driving secondary particle formation are likely to vary depending on the environment, measurements of atmospheric nucleation and particle growth are needed from around the world to be able to better describe secondary particle formation and assess its climatic effects on a global scale.

Abstract:

New stars form in dense interstellar clouds of gas and dust called molecular clouds. The actual sites where the process of star formation takes place are the dense clumps and cores deeply embedded in molecular clouds. The details of the star formation process are complex and not completely understood. Thus, determining the physical and chemical properties of molecular cloud cores is necessary for a better understanding of how stars are formed. Some of the main features of the origin of low-mass stars, like the Sun, are already relatively well known, though many details of the process are still under debate. The mechanism through which high-mass stars form, on the other hand, is poorly understood. Although it is likely that the formation of high-mass stars shares many properties similar to those of low-mass stars, the very first steps of the evolutionary sequence are unclear. Observational studies of star formation are carried out particularly at infrared, submillimetre, millimetre, and radio wavelengths. Much of our knowledge about the early stages of star formation in our Milky Way galaxy is obtained through molecular spectral line and dust continuum observations. The continuum emission of cold dust is one of the best tracers of the column density of molecular hydrogen, the main constituent of molecular clouds. Consequently, dust continuum observations provide a powerful tool to map large areas of molecular clouds and to identify the dense star-forming sites within them. Molecular line observations, on the other hand, provide information on the gas kinematics and temperature. Together, these two observational tools provide an efficient way to study the dense interstellar gas and the associated dust that form new stars. The properties of highly obscured young stars can be further examined through radio continuum observations at centimetre wavelengths. For example, radio continuum emission carries useful information on conditions in the protostar+disk interaction region where protostellar jets are launched. In this PhD thesis, we study the physical and chemical properties of dense clumps and cores in both low- and high-mass star-forming regions. The sources are mainly studied in a statistical sense, but some are also examined in more detail. In this way, we are able to examine the general characteristics of the early stages of star formation, cloud properties on large scales (such as fragmentation), and some of the initial conditions of the collapse process that leads to the formation of a star. The studies presented in this thesis are mainly based on molecular line and dust continuum observations. These are combined with archival observations at infrared wavelengths in order to study the protostellar content of the cloud cores. In addition, centimetre radio continuum emission from young stellar objects (YSOs; i.e., protostars and pre-main sequence stars) is studied in this thesis to determine their evolutionary stages.
The main results of this thesis are as follows: i) filamentary and sheet-like molecular cloud structures, such as infrared dark clouds (IRDCs), are likely to be caused by supersonic turbulence, but their fragmentation at the scale of cores could be due to gravo-thermal instability; ii) the core evolution in the Orion B9 star-forming region appears to be dynamic, and the role played by slow ambipolar diffusion in the formation and collapse of the cores may not be significant; iii) the study of the R CrA star-forming region suggests that the centimetre radio emission properties of a YSO are likely to change with its evolutionary stage; iv) the IRDC G304.74+01.32 contains candidate high-mass starless cores which may represent the very first steps of high-mass star and star cluster formation; v) SiO outflow signatures are seen in several high-mass star-forming regions, which suggests that high-mass stars form in a similar way to their low-mass counterparts, i.e., via disk accretion. The results presented in this thesis provide constraints on the initial conditions and early stages of both low- and high-mass star formation. In particular, this thesis presents several observational results on the early stages of clustered star formation, which is the dominant mode of star formation in our Galaxy.

Abstract:

This work examines stable isotope ratios of carbon, oxygen, and hydrogen in the annual growth rings of trees. The isotopic composition of wood cellulose is used as a tool to study past climate. The method benefits from the accurate and precise dating provided by dendrochronology. In this study, the origin, nature, and strength of climatic correlations are studied on different temporal scales and at different sites in Finland. The origin of the carbon isotopic signal is in photosynthetic fractionation. The basic physical and chemical fractionations involved are reasonably well understood. This was confirmed by measuring instantaneous photosynthetic discrimination on Scots pine (Pinus sylvestris L.). The internal conductance of CO2 was recognized to have a significant impact on the observed fractionation, and further investigations are suggested to quantify its role in controlling the isotopic signal of photosynthates. The isotopic composition of the produced biomass can potentially be affected by a variety of external factors that induce physiological changes in trees. The response of the carbon isotopic signal in tree ring cellulose to changes in resource availability was assessed in a manipulation experiment. It showed that the signal was relatively stable despite changes in water and nitrogen availability to the tree. Palaeoclimatic reconstructions are typically based on functions describing the empirical relationship between isotopic and climatic parameters. These empirical relationships may change depending on the site conditions, species, and timeframe studied. Annual variation in Scots pine tree ring carbon and oxygen isotopic composition was studied in northern and in central eastern Finland, and annual variation in tree ring latewood carbon, oxygen, and hydrogen isotopic ratios in oak (Quercus robur L.) was studied in southern Finland. At all of the studied sites, at least one of the studied isotope ratios was shown to record climate strongly enough to be used in climatic reconstructions. Using the observed relationships, four-century-long climate reconstructions from living Scots pine were created for northern and central eastern Finland. The temporal stability of the relationships between the three proxy indicators (tree ring growth and carbon and oxygen isotopic composition) was also studied over the four-hundred-year period. Isotope ratios measured from tree rings in Finland were shown to be sensitive indicators of climate. Increasing understanding of the environmental controls and physiological mechanisms affecting tree ring isotopic composition will make possible more accurate interpretation of isotope data. This study also demonstrated that by measuring multiple isotopes and physical proxies from the same tree rings, additional information on tree physiology can be obtained. Thus, isotopic ratios measured from tree ring cellulose provide a means to improve the reliability of climate reconstructions.
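
The photosynthetic origin of the carbon isotope signal is commonly summarised with a simplified Farquhar-type discrimination model, sketched below with typical textbook values (a ~ 4.4 per mil for diffusion through stomata, b ~ 27 per mil for carboxylation). This is a generic illustration rather than the exact formulation used in the thesis, and it deliberately omits the internal (mesophyll) conductance term whose importance the work highlights.

    # Simplified 13C discrimination: delta = a + (b - a) * ci/ca,
    # where ci/ca is the ratio of leaf-internal to ambient CO2 concentration.
    # Internal (mesophyll) conductance is neglected in this simple form.
    def discrimination(ci_over_ca, a=4.4, b=27.0):
        return a + (b - a) * ci_over_ca

    print(discrimination(0.7))   # ~20.2 per mil for a typical ci/ca of 0.7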