42 results for Virial Masses
in Helda - Digital Repository of the University of Helsinki
Abstract:
The topic of this study is the most renowned anthology of essays written in Literary Chinese, Guwen guanzhi, compiled and edited by Wu Chengquan (Chucai) and Wu Dazhi (Diaohou), and first published during the Qing dynasty, in 1695. Because of the low social standing of the compilers, their anthology remained outside the recommended study materials produced by members of the established literati and used for preparing students for the imperial civil-service examinations. However, since the end of the imperial era, Guwen guanzhi has risen to the position of the classical anthology par excellence. Today it is widely used as required or supplementary reading material for Literary Chinese in middle schools both in Mainland China and in Taiwan. The goal of this study is to explain the persistent longevity of the anthology. So far, Guwen guanzhi has not been the topic of any published academic study, and the opinions expressed on it in various sources are widely discrepant. Through a comparative study of a dozen classical Chinese anthologies in use during the early Qing dynasty, this study reveals the extent to which the compilers of Guwen guanzhi modelled their work on other selections. Altogether 86% of the texts in Guwen guanzhi originate from another Qing-era anthology, Guwen xiyi, often copied character by character. However, the notes and commentaries are all different. Concentrating on the special characteristics unique to Guwen guanzhi, namely the commentaries and certain peculiarities in the selection of texts, this study then discusses the possible reasons for the popularity of Guwen guanzhi over the competing readers during the Qing era. Most remarkably, Guwen guanzhi put into practice the egalitarian educational ideals of the Ming philosopher Wang Shouren (Yangming). Thus Guwen guanzhi suited the self-enlightenment needs of the "subordinate classes", in particular the rising middle class composed mainly of merchants.
The lack of moral teleology, together with the compact size, the relative comprehensiveness of the selection and the good notes and comments, has made Guwen guanzhi well suited to the new society that followed the abolition of the imperial examination system. Through a content analysis based on a sample of the texts, this study measures the relative emphasis on centralism and localism (in both concrete and spiritual terms) expressed in the texts of Guwen guanzhi. The analysis shows that the texts manifest some bias towards emphasising innate virtue at the expense of state-defined morals. This may reflect hidden criticism of the intellectual oppression exercised by centralised imperial rule. During the early decades of the Qing era, such criticism was often linked to Ming loyalism. Finally, this study concludes that the kind of "spiritual localism" that Guwen guanzhi manifests gives it the potential to undermine monolithic orthodoxy even in today's Chinese societies. This study has progressed hand in hand with the translation of a selection of texts from Guwen guanzhi into Finnish, published by Gaudeamus Helsinki University Press: Jadekasvot – Valittuja tarinoita Kiinan muinaisajoilta (2005), Jadelähde – Valittuja kirjoituksia Kiinan keskiajalta (2007) and Jadepeili – Valittuja kirjoituksia keisarillisen Kiinan kulta-ajoilta (2008). All translations are critical editions, complete with extensive annotation. The trilogy is the first comprehensive translation based on Guwen guanzhi in a European language.
Abstract:
This study is a pragmatic description of the evolution of the genre of English witchcraft pamphlets from the mid-sixteenth century to the end of the seventeenth century. Witchcraft pamphlets were produced for a new kind of readership, the semi-literate, uneducated masses, and the central hypothesis of this study is that publishing for the masses entailed rethinking the ways of writing and printing texts. Analysis of the use of typographical variation and illustrations indicates how printers and publishers catered to the tastes and expectations of this new audience. Analysis of the language of witchcraft pamphlets shows how pamphlet writers took the new readership into account by transforming formal written source materials, trial proceedings, into more immediate ways of writing. The material for this study comes from the Corpus of Early Modern English Witchcraft Pamphlets, which was compiled by the author. The multidisciplinary analysis incorporates both visual and linguistic aspects of the texts, with methodologies and theoretical insights adopted eclectically from historical pragmatics, genre studies, book history, corpus linguistics, systemic functional linguistics and cognitive psychology. The findings are anchored in the socio-historical context of early modern publishing, reading, literacy and witchcraft beliefs. The study shows not only how consideration of a new audience by both authors and printers influenced the development of a genre, but also the value of combining visual and linguistic features in pragmatic analyses of texts.
Abstract:
Drug Analysis without Primary Reference Standards: Application of LC-TOFMS and LC-CLND to Biofluids and Seized Material
Primary reference standards for new drugs, metabolites, designer drugs or rare substances may not be obtainable within a reasonable period of time, or their availability may be hindered by extensive administrative requirements. Standards are usually costly and may have a limited shelf life. Finally, many compounds are not available commercially and some not at all. A new approach within forensic and clinical drug analysis involves substance identification based on accurate mass measurement by liquid chromatography coupled with time-of-flight mass spectrometry (LC-TOFMS) and quantification by LC coupled with chemiluminescence nitrogen detection (LC-CLND), which possesses an equimolar response to nitrogen. Formula-based identification relies on the fact that the accurate mass of an ion from a chemical compound corresponds to the elemental composition of that compound. Single-calibrant nitrogen-based quantification is feasible with a nitrogen-specific detector, since approximately 90% of drugs contain nitrogen. A method was developed for toxicological drug screening in 1 ml urine samples by LC-TOFMS. A large target database of exact monoisotopic masses was constructed, representing the elemental formulae of reference drugs and their metabolites. Identification was based on matching the sample component's measured parameters with those in the database, including accurate mass and retention time, if available. In addition, an algorithm for isotopic pattern match (SigmaFit) was applied. Differences in ion abundance in urine extracts did not affect the mass accuracy or the SigmaFit values. For routine screening practice, a mass tolerance of 10 ppm and a SigmaFit tolerance of 0.03 were established. Seized street-drug samples were analysed directly by LC-TOFMS and LC-CLND, using a dilute-and-shoot approach.
In the quantitative analysis of amphetamine, heroin and cocaine findings, the mean relative difference between the results of LC-CLND and the reference methods was only 11%. In blood specimens, liquid-liquid extraction recoveries for basic lipophilic drugs were first established and the validity of the generic extraction recovery-corrected single-calibrant LC-CLND was then verified with proficiency test samples. The mean accuracy was 24% and 17% for plasma and whole blood samples, respectively, all results falling within the confidence range of the reference concentrations. Further, metabolic ratios for the opioid drug tramadol were determined in a pharmacogenetic study setting. Extraction recovery estimation, based on model compounds with similar physicochemical characteristics, produced clinically feasible results without reference standards.
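The formula-based identification described above, matching a measured accurate mass against a database of monoisotopic masses within a ppm tolerance, can be sketched as follows. This is a minimal illustration, not the thesis software; the database entries and the measured mass are illustrative values for protonated ions.

```python
# Minimal sketch of accurate-mass matching with a 10 ppm tolerance.
# The [M+H]+ monoisotopic masses below are illustrative entries, not
# the actual target database of the study.
TARGET_DB = {
    "C9H13N (amphetamine)": 136.1121,
    "C17H21NO4 (cocaine)": 304.1543,
}

def ppm_error(measured: float, theoretical: float) -> float:
    """Mass error in parts per million."""
    return (measured - theoretical) / theoretical * 1e6

def match_candidates(measured_mass: float, tolerance_ppm: float = 10.0):
    """Return all database entries within the ppm mass tolerance."""
    return [
        name
        for name, mass in TARGET_DB.items()
        if abs(ppm_error(measured_mass, mass)) <= tolerance_ppm
    ]

# A measurement of 304.1540 Da is within 1 ppm of the cocaine entry.
print(match_candidates(304.1540))
```

In the actual screening method, such a mass match would further be constrained by retention time and the SigmaFit isotopic-pattern score before an identification is reported.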
Abstract:
The structures of the (1→3),(1→4)-β-D-glucans of oat bran, whole-grain oats and barley, and processed foods were analysed. Various methods for the hydrolysis of β-glucan, the insoluble fibre content of whole grains of oats and barley, and the solution behaviour of oat and barley β-glucans were also studied. The isolated soluble β-glucans of oat bran and of whole-grain oats and barley were hydrolysed with lichenase, an enzyme specific for (1→3),(1→4)-β-D-glucans. The amounts of oligosaccharides produced from bran were analysed with capillary electrophoresis, and those from whole grains with high-performance anion-exchange chromatography with pulsed amperometric detection. The main products were 3-O-β-cellobiosyl-D-glucose and 3-O-β-cellotriosyl-D-glucose, oligosaccharides with degrees of polymerisation of three and four (DP3 and DP4). Small differences were detected between soluble and insoluble β-glucans and also between the β-glucans of oats and barley. These differences can only be seen in the DP3:DP4 ratio, which was higher for barley than for oats and also higher for insoluble than for soluble β-glucan. A greater proportion of barley β-glucan than of oat β-glucan remained insoluble. The molar masses of the soluble β-glucans of oats and barley were the same, as were those of the insoluble β-glucans. To analyse the effects of cooking, baking, fermentation and drying, β-glucan was isolated from porridge, bread and fermentate, and also from their starting materials. More β-glucan was released after cooking and less after baking. Drying decreased the extractability for bread and fermentate but increased it for porridge. Different methods for the hydrolysis of β-glucan were compared. Acid hydrolysis and the modified AOAC method gave similar results, while hydrolysis with lichenase gave higher recoveries than the other two.
The combination of lichenase hydrolysis and high-performance anion-exchange chromatography with pulsed amperometric detection was found to be the best for the analysis of β-glucan content. The insoluble fibre content was higher for barley than for oats, whereas the amount of β-glucan in the insoluble fibre fraction was higher for oats than for barley. The flow properties of both water and aqueous cuoxam solutions of oat and barley β-glucans were studied. Shear thinning was stronger for the water solutions of oat β-glucan than for those of barley β-glucan. In aqueous cuoxam, shear thinning was not observed at the same concentrations as in water, but only in highly concentrated solutions, in which the viscosity of barley β-glucan was slightly higher than that of oat β-glucan. The oscillatory measurements showed that the crossover point of the G′ and G″ curves occurred much lower for barley β-glucan than for oat β-glucan, indicating a higher tendency towards solid-like behaviour for barley β-glucan.
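The DP3:DP4 ratio used above to distinguish oat from barley β-glucan is simply the ratio of the two oligosaccharide responses measured after lichenase hydrolysis. A minimal sketch, with illustrative (not measured) peak values:

```python
# Hypothetical sketch: the DP3:DP4 ratio from chromatographic peak
# responses of the two lichenase hydrolysis products. Values are
# illustrative, not data from the study.
def dp3_dp4_ratio(response_dp3: float, response_dp4: float) -> float:
    """Ratio of the DP3 (3-O-beta-cellobiosyl-D-glucose) response to
    the DP4 (3-O-beta-cellotriosyl-D-glucose) response."""
    if response_dp4 == 0:
        raise ValueError("DP4 response must be non-zero")
    return response_dp3 / response_dp4

# Illustrative responses: the ratio is reported to be higher for
# barley beta-glucan than for oat beta-glucan.
barley_ratio = dp3_dp4_ratio(3.0, 1.0)
oat_ratio = dp3_dp4_ratio(2.2, 1.0)
print(barley_ratio > oat_ratio)
```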
Abstract:
In technicolor theories the scalar sector of the Standard Model is replaced by a strongly interacting sector. Although the Standard Model has been exceptionally successful, the scalar sector causes theoretical problems that make these theories an attractive alternative. I begin my thesis by considering QCD, the best-known example of strong interactions. The theory exhibits two phenomena: confinement and chiral symmetry breaking. I find the low-energy dynamics to be similar to that of the sigma models. Then I analyze the problems of the Standard Model Higgs sector, mainly its unnaturalness and triviality. Motivated by the example of QCD, I introduce the minimal technicolor model to resolve these problems. I demonstrate that the minimal model is free of anomalies and then deduce the main elements of its low-energy particle spectrum. I find that the particle spectrum contains massless or very light technipions, as well as technibaryons and techni-vector mesons with high masses of over 1 TeV. Standard Model fermions remain strictly massless at this stage. I therefore introduce the technicolor companion theory of flavor, called extended technicolor. I show that the Standard Model fermions and technihadrons receive masses, but that they remain too light. I also discuss flavor-changing neutral currents and precision electroweak measurements. I then show that walking technicolor models partly solve these problems. In these models, contrary to QCD, the coupling evolves slowly over a large energy scale. This behavior enhances the masses, so that even the light technihadrons are too heavy to be detected at current particle accelerators. All observed masses of the Standard Model particles can also be generated, except those of the bottom and top quarks. Thus it is shown in this thesis that, excluding the masses of the third-generation quarks, theories based on walking technicolor can in principle produce the observed particle spectrum.
Abstract:
One of the unanswered questions of modern cosmology is the issue of baryogenesis. Why does the universe contain a huge amount of baryons but no antibaryons? What kind of mechanism can produce such an asymmetry? One theory that addresses this problem is leptogenesis. In this theory, right-handed neutrinos with heavy Majorana masses are added to the standard model. This addition introduces explicit lepton number violation into the theory. Instead of producing the baryon asymmetry directly, these heavy neutrinos decay in the early universe. If these decays are CP-violating, they produce a net lepton number. This lepton number is then partially converted into baryon number by the electroweak sphaleron process. In this work we start by reviewing the current observational data on the amount of baryons in the universe. We also introduce Sakharov's conditions, the necessary criteria for any theory of baryogenesis. We review the current data on neutrino oscillation and explain why this requires the existence of neutrino mass. We introduce the different kinds of mass terms that can be added for neutrinos, and explain how the see-saw mechanism naturally explains the observed neutrino mass scales, motivating the addition of the Majorana mass term. After introducing leptogenesis qualitatively, we derive the Boltzmann equations governing leptogenesis and give analytical approximations to them. Finally, we review the numerical solutions of these equations, demonstrating the capability of leptogenesis to explain the observed baryon asymmetry. In the appendix, simple Feynman rules are given for theories with interactions between both Dirac and Majorana fermions, and these are applied at tree level to calculate the parameters relevant to the theory.
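For orientation, the Boltzmann equations mentioned above are often written in a standard simplified single-flavour form (a common textbook parametrisation, not necessarily the exact form derived in this work), with z = M₁/T, the heavy-neutrino abundance N_{N₁}, decay and scattering terms D and S, a washout term W, and the CP asymmetry ε₁:

```latex
\frac{\mathrm{d}N_{N_1}}{\mathrm{d}z} = -(D+S)\left(N_{N_1}-N_{N_1}^{\mathrm{eq}}\right),
\qquad
\frac{\mathrm{d}N_{B-L}}{\mathrm{d}z} = -\varepsilon_1\, D\left(N_{N_1}-N_{N_1}^{\mathrm{eq}}\right) - W\, N_{B-L}
```

The first equation tracks how the heavy neutrinos depart from equilibrium; the source term in the second generates a B−L asymmetry from their CP-violating decays, while the washout term W erases it.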
Abstract:
An asymmetrical flow field-flow fractionation (AsFlFFF) instrument was constructed, and its applicability to industrial, biochemical and pharmaceutical problems was studied. The effect of several parameters, such as pH, ionic strength, temperature and the mixing ratios of the reactants, on the particle sizes, molar masses and aggregate formation of macromolecules was determined by AsFlFFF. In the industrial application, AsFlFFF proved to be a valuable tool for characterising the hydrodynamic particle sizes, molar masses and phase-transition behavior of various poly(N-isopropylacrylamide) (PNIPAM) polymers as a function of viscosity and phase-transition temperature. The effects of sodium chloride and of the molar ratio of cationic and anionic polyelectrolytes on the hydrodynamic particle sizes of poly(methacryloxyethyl trimethylammonium chloride) and poly(ethylene oxide)-block-poly(sodium methacrylate) and their complexes were studied. The particle sizes of the PNIPAM polymers and polyelectrolyte complexes measured by AsFlFFF were in agreement with those obtained by dynamic light scattering, and the molar masses of the PNIPAM polymers obtained by AsFlFFF and by size exclusion chromatography also agreed well. In addition, AsFlFFF proved to be a practical technique for studying the thermo-responsive behavior of polymers at temperatures up to about 50 °C. The suitability of AsFlFFF for biological, biomedical and pharmaceutical applications was demonstrated by studying lipid-protein/peptide interactions and the stability of liposomes at different temperatures. AsFlFFF was applied to studies of the hydrophobic and electrostatic interactions between cytochrome c (a basic peripheral protein) and an anionic lipid, oleic acid, and the surfactant sodium dodecyl sulphate. A miniaturized AsFlFFF device constructed in this study was exploited to elucidate the effects of copper(II), pH, ionic strength and vortexing on the particle sizes of low-density lipoproteins.
Abstract:
Recent epidemiological studies have shown a consistent association of the mass concentrations of urban air thoracic (PM10) and fine (PM2.5) particles with mortality and morbidity among cardiorespiratory patients. However, the chemical characteristics of the different particulate size ranges and the biological mechanisms responsible for these adverse health effects are not well known. The principal aims of this thesis were to validate a high-volume cascade impactor (HVCI) for the collection of particulate matter for physicochemical and toxicological studies, and to make an in-depth chemical and source characterisation of samples collected during different pollution situations. The particulate samples were collected with the HVCI, virtual impactors and a Berner low-pressure impactor in six European cities: Helsinki, Duisburg, Prague, Amsterdam, Barcelona and Athens. The samples were analysed for particle mass, common ions, total and water-soluble elements, as well as elemental and organic carbon. Laboratory calibration and field comparisons indicated that the HVCI can provide unique large-capacity, high-efficiency sampling of size-segregated aerosol particles. The cutoff sizes of the recommended HVCI configuration were 2.4, 0.9 and 0.2 μm. The HVCI mass concentrations were in good agreement with the reference methods, but the chemical composition of especially the fine particulate samples showed some differences. This implies that the chemical characterisation of the exposure variable in toxicological studies needs to be done on the same HVCI samples as are used in the cell and animal studies. The data from parallel low-volume reference samplers provide valuable additional information for chemical mass closure and source assessment.
The major components of PM2.5 in the virtual impactor samples were carbonaceous compounds, secondary inorganic ions and sea salt, whereas those of coarse particles (PM2.5-10) were soil-derived compounds, carbonaceous compounds, sea salt and nitrate. The major and minor components together accounted for 77-106% and 77-96% of the gravimetrically measured masses of fine and coarse particles, respectively. Relatively large differences between sampling campaigns were observed in the organic carbon content of the PM2.5 samples, as well as in the mineral composition of the PM2.5-10 samples. A source assessment based on chemical tracers suggested clear differences in the dominant sources (e.g. traffic, residential heating with solid fuels, metal industry plants, regional or long-range transport) between the sampling campaigns. In summary, the field campaigns exhibited different profiles with regard to particulate sources, size distribution and chemical composition, thus providing a highly useful setup for toxicological studies on the size-segregated HVCI samples.
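The chemical mass closure mentioned above compares the summed analysed components against the gravimetrically measured particle mass; values near 100% mean the analyses account for essentially all of the collected mass. A minimal sketch, with purely illustrative component concentrations:

```python
# Hypothetical sketch of a chemical mass closure calculation. The
# component concentrations (ug/m3) are illustrative, not measured data.
def mass_closure_percent(components: dict, gravimetric_mass: float) -> float:
    """Sum of identified components as a percentage of the
    gravimetrically measured particle mass."""
    return sum(components.values()) / gravimetric_mass * 100.0

pm25_components = {
    "carbonaceous compounds": 6.0,    # organic + elemental carbon
    "secondary inorganic ions": 4.2,  # sulphate, nitrate, ammonium
    "sea salt": 0.6,
}
print(round(mass_closure_percent(pm25_components, 12.0), 1))
```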
Abstract:
Polymer-protected gold nanoparticles were successfully synthesized by both "grafting-from" and "grafting-to" techniques, and the synthesis methods for the gold particles were systematically studied. Two chemically different homopolymers were used to protect the gold particles: thermo-responsive poly(N-isopropylacrylamide), PNIPAM, and polystyrene, PS. Both polymers were synthesized using a controlled/living radical polymerization process, reversible addition-fragmentation chain transfer (RAFT) polymerization, to obtain monodisperse polymers of various molar masses carrying dithiobenzoate end groups. Hence, particles protected either with PNIPAM alone, PNIPAM-AuNPs, or with a mixture of the two polymers, PNIPAM/PS-AuNPs (i.e. amphiphilic gold nanoparticles), were prepared. The particles carry monodisperse polymer shells, though the cores are somewhat polydisperse. Aqueous PNIPAM-AuNPs prepared using the "grafting-from" technique show thermo-responsive properties derived from the tethered PNIPAM chains. For PNIPAM-AuNPs prepared using the "grafting-to" technique, two phase transitions of PNIPAM were observed in microcalorimetric studies of the aqueous solutions. The first transition, with a sharp and narrow endothermic peak, occurs at a lower temperature, and the second, with a broader peak, at a higher temperature. In the first transition the PNIPAM segments show much higher cooperativity than in the second. The observations are tentatively rationalized by assuming that the PNIPAM brush can be subdivided into two zones, an inner and an outer one. In the inner zone, the PNIPAM segments are close to the gold surface, densely packed and less hydrated, and undergo the first transition. In the outer zone, on the other hand, the PNIPAM segments are looser and more hydrated, adopt a restricted random-coil conformation, and show a phase transition that depends on both the particle concentration and the chemical nature of the end groups of the PNIPAM chains.
Monolayers of the amphiphilic gold nanoparticles at the air-water interface show several characteristic regions upon compression in a Langmuir trough at room temperature. These can be attributed to polymer conformational transitions from a pancake to a brush. The compression isotherms also show a temperature dependence due to the thermo-responsive properties of the tethered PNIPAM chains. The films were successfully deposited on substrates by the Langmuir-Blodgett technique. Sessile-drop contact-angle measurements conducted on both sides of a monolayer deposited at room temperature reveal two slightly different contact angles, which may indicate phase separation between the tethered PNIPAM and PS chains on the gold core. The optical properties of the amphiphilic gold nanoparticles were studied both in situ at the air-water interface and in the deposited films. The in situ SPR band of the monolayer shows a blue shift with compression, while a red shift with the deposition cycle occurs in the deposited films. The blue shift is compression-induced and closely related to the conformational change of the tethered PNIPAM chains, which may decrease the polarity of the local environment of the gold cores. The red shift in the deposited films is due to weak interparticle coupling between adjacent particles. Temperature effects on the SPR band were also investigated in both cases. In the in situ case, at constant surface pressure, an increase in temperature leads to a red shift in the SPR, likely due to the shrinking of the tethered PNIPAM chains as well as to a slight decrease in the distance between adjacent particles, resulting in increased interparticle coupling. In the case of the deposited films, however, the SPR band red-shifts with the deposition cycles more at high temperature than at low temperature.
This is because the compressibility of the polymer-coated gold nanoparticles at high temperature leads to a smaller interparticle distance, resulting in stronger interparticle coupling in the deposited multilayers.
Abstract:
Palaeoenvironments of the latter half of the Weichselian ice age and of the transition to the Holocene, from ca. 52 to 4 ka, were investigated using isotopic analyses of oxygen, carbon and strontium in mammal skeletal apatite. The study material consisted predominantly of subfossil bones and teeth of the woolly mammoth (Mammuthus primigenius Blumenbach), collected from Europe and from Wrangel Island, northeastern Siberia. All samples have been radiocarbon dated, and their ages range from >52 ka to 4 ka. Altogether, 100 specimens were sampled for the isotopic work. In Europe, the studies focused on the glacial palaeoclimate and habitat palaeoecology. To minimise the influence of possible diagenetic effects, the palaeoclimatological and ecological reconstructions were based on the enamel samples only. The results of the oxygen isotope analysis of mammoth enamel phosphate from Finland and adjacent northwestern Russia, Estonia, Latvia, Lithuania, Poland, Denmark and Sweden provide the first estimate of the oxygen isotope values of glacial precipitation in northern Europe. The glacial precipitation oxygen isotope values range from ca. -9.2±1.5‰ in western Denmark to -15.3‰ in Kirillov, northwestern Russia. These values are 0.6-4.1‰ lower than those in present-day precipitation, with the largest changes recorded in the currently marine-influenced southern Sweden and the Baltic region. The new enamel-derived oxygen isotope data from this study, combined with oxygen isotope records from earlier investigations of mammoth tooth enamel and palaeogroundwaters, facilitate a reconstruction of the spatial patterns of the oxygen isotope values of precipitation and of palaeotemperatures over much of Europe.
The reconstructed geographic pattern of oxygen isotope levels in precipitation during 52-24 ka reflects the progressive isotopic depletion of air masses moving northeast, consistent with a westerly source of moisture for the entire region and a circulation pattern similar to that of the present day. The application of regionally varied δ/T-slopes, estimated from palaeogroundwater data and modern spatial correlations, yields reasonable estimates of glacial surface temperatures in Europe and implies 2-9°C lower long-term mean annual surface temperatures during the glacial period. The isotopic composition of carbon in the enamel samples indicates a pure C3 diet for the European mammoths, in agreement with previous investigations of mammoth ecology. A faint geographical gradient in the carbon isotope values of enamel is discernible, with more negative values in the northeast. This spatial trend is consistent with the climatic implications of the enamel oxygen isotope data, but may also suggest regional differences in habitat openness. The palaeogeographical changes caused by the eustatic rise of global sea level at the end of the Weichselian ice age were investigated on Wrangel Island, using the strontium isotope (87Sr/86Sr) ratios in the skeletal apatite of the local mammoth fauna. The diagenetic evaluations suggest good preservation of the original Sr isotope ratios, even in the bone specimens included in the study material. To estimate present-day environmental Sr isotope values on Wrangel Island, bioapatite samples from modern reindeer and muskoxen, as well as surface waters from rivers and ice wedges, were analysed. A significant shift towards more radiogenic bioapatite Sr isotope ratios, from 0.71218 ± 0.00103 to 0.71491 ± 0.00138, marks the beginning of the Holocene. This implies a change in the migration patterns of the mammals, ultimately reflecting the inundation of the mainland connection and the isolation of the population.
The bioapatite Sr isotope data support published coastline reconstructions placing the separation from the mainland at ca. 10-10.5 ka ago. The shift towards more radiogenic Sr isotope values in mid-Holocene subfossil remains younger than 8 ka reflects the rapid rise of sea level from 10 to 8 ka, which resulted in a considerable reduction of the accessible range area on the early Wrangel Island.
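The δ/T-slope approach described above converts the depletion of the oxygen isotope value of glacial precipitation relative to modern values into a temperature difference. A minimal numerical sketch; the slope of 0.6 ‰/°C and both δ values are illustrative assumptions, not values from this study:

```python
# Hypothetical sketch of the delta/T-slope method: the shift in the
# oxygen-isotope value of precipitation divided by a regional slope
# gives the glacial-modern temperature difference. The slope and the
# delta values below are illustrative, not results from the thesis.
def temperature_shift(delta_glacial: float, delta_modern: float,
                      slope_permil_per_degc: float = 0.6) -> float:
    """Glacial minus modern mean annual surface temperature (degC)."""
    return (delta_glacial - delta_modern) / slope_permil_per_degc

# A 2.4 per-mil depletion with a 0.6 per-mil/degC slope implies a
# glacial temperature roughly 4 degC below the modern one.
print(temperature_shift(-13.4, -11.0))
```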
Abstract:
Wireless technologies are continuously evolving. Second-generation cellular networks have gained worldwide acceptance. Wireless LANs are commonly deployed in corporations and on university campuses, and their diffusion in public hotspots is growing. Third-generation cellular systems have yet to take hold everywhere; still, an impressive amount of research is ongoing on deploying beyond-3G systems. These new wireless technologies combine the characteristics of WLAN-based and cellular networks to provide increased bandwidth. The common direction in which all these efforts in wireless technologies are headed is IP-based communication. Telephony services have been the killer application for cellular systems, and their evolution to packet-switched networks is a natural path. Effective IP telephony signaling protocols, such as the Session Initiation Protocol (SIP) and the H.323 protocol, are needed to establish IP-based telephony sessions. However, IP telephony is just one example of an IP-based communication service. IP-based multimedia sessions are expected to become popular and to offer a wider range of communication capabilities than pure telephony. In order to combine the advances of future wireless technologies with the potential of IP-based multimedia communication, the next step is to obtain ubiquitous communication capabilities. According to this vision, people must be able to communicate even when no support from an infrastructure network is available, needed or desired. To achieve ubiquitous communication, end devices must integrate all the capabilities necessary for IP-based distributed and decentralized communication. Such capabilities are currently missing; for example, it is not possible to utilize native IP telephony signaling protocols in a totally decentralized way. This dissertation presents a solution for deploying the SIP protocol in a decentralized fashion, without the support of infrastructure servers.
The proposed solution is designed mainly to fit the needs of decentralized mobile environments, and can be applied to small-scale ad-hoc networks as well as to larger networks with hundreds of nodes. A framework allowing the discovery of SIP users in ad-hoc networks and the establishment of SIP sessions among them, in a fully distributed and secure way, is described and evaluated. Security support allows ad-hoc users to authenticate the sender of a message and to verify the integrity of a received message. The distributed session management framework has been extended to achieve interoperability with the Internet and with native Internet applications. With limited extensions to the SIP protocol, we have designed and experimentally validated a SIP gateway that allows SIP signaling between ad-hoc networks with a private addressing space and native SIP applications in the Internet. The design is completed by an application-level relay that permits instant messaging sessions to be established in heterogeneous environments. The resulting framework constitutes a flexible and effective approach for the pervasive deployment of real-time applications.
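For readers unfamiliar with SIP, session establishment is initiated with a text-based INVITE request (RFC 3261); in a decentralized setting such a message is sent directly to a discovered peer rather than through a proxy server. The sketch below assembles a minimal, illustrative INVITE; the user names, addresses and tags are hypothetical, and this is not the dissertation's framework code:

```python
# Hypothetical sketch of a minimal SIP INVITE request, of the kind a
# decentralized user agent would send directly to a peer discovered in
# an ad-hoc network. All names, addresses and tags are illustrative.
def build_invite(caller: str, callee: str, peer_addr: str) -> str:
    """Assemble a bare-bones SIP INVITE message (RFC 3261 text format)."""
    lines = [
        f"INVITE sip:{callee}@{peer_addr} SIP/2.0",
        f"Via: SIP/2.0/UDP {peer_addr};branch=z9hG4bK-demo",
        f"From: <sip:{caller}@192.168.1.10>;tag=demo1",
        f"To: <sip:{callee}@{peer_addr}>",
        "Call-ID: demo-call-id@192.168.1.10",
        "CSeq: 1 INVITE",
        "Content-Length: 0",
        "",  # blank line terminates the header section
        "",
    ]
    return "\r\n".join(lines)

msg = build_invite("alice", "bob", "192.168.1.20")
print(msg.splitlines()[0])  # the request line
```

In the decentralized framework described above, the key difference from standard SIP is how the peer's address is obtained: through distributed user discovery in the ad-hoc network instead of registrar and proxy servers.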
Abstract:
Aerosol particles can cause detrimental environmental and health effects. The particles and their precursor gases are emitted from various anthropogenic and natural sources. It is important to know the origin and properties of aerosols in order to reduce their harmful effects efficiently. The diameter of aerosol particles (Dp) varies between ~0.001 and ~100 μm. Fine particles (PM2.5: Dp < 2.5 μm) are especially interesting because they are the most harmful and can be transported over long distances. The aim of this thesis is to study the impact on air quality of pollution episodes of long-range-transported aerosols, which affect the composition of the boundary-layer atmosphere in remote and relatively unpolluted regions of the world. The sources and physicochemical properties of aerosols were investigated in detail, based on various measurements (1) in southern Finland during selected long-range transport (LRT) pollution episodes and unpolluted periods and (2) over the Atlantic Ocean between Europe and Antarctica during a voyage. Furthermore, the frequency of LRT pollution episodes of fine particles in southern Finland was investigated over a period of 8 years, using long-term air quality monitoring data. In southern Finland, the annual mean PM2.5 mass concentrations were low, but LRT caused high peaks in the daily mean concentrations every year. At an urban background site in Helsinki, the updated WHO guideline value (a 24-h PM2.5 mean of 25 μg/m³) was exceeded during 1-7 LRT episodes each year in 1999-2006. The daily mean concentrations varied between 25 and 49 μg/m³ during the episodes, 3-6 times higher than the long-term mean concentration. In-depth studies of selected LRT episodes in southern Finland revealed that biomass burning in agricultural fields and wildfires, occurring mainly in Eastern Europe, deteriorated air quality on a continental scale.
The strongest LRT episodes of fine particles resulted from open biomass-burning fires, but the emissions from other anthropogenic sources in Eastern Europe also caused significant LRT episodes. Particle mass and number concentrations increased strongly in the accumulation mode (Dp ~ 0.09-1 μm) during the LRT episodes. However, the concentrations of smaller particles (Dp < 0.09 μm) remained low or even decreased due to the uptake of vapours and molecular clusters by LRT particles. The chemical analysis of individual particles showed that the proportions of several anthropogenic particle types increased (e.g. tar balls, metal oxides/hydroxides, spherical silicate fly ash particles and various calcium-rich particles) in southern Finland during an LRT episode, when aerosols originated from the polluted regions of Eastern Europe and some open biomass-burning smoke was brought in by LRT. During unpolluted periods when air masses arrived from the north, the proportions of marine aerosols increased. In unpolluted rural regions of southern Finland, both accumulation mode particles and small-sized (Dp ~ 1-3 μm) coarse mode particles originated mostly from LRT. However, the composition of the particles differed markedly between these size fractions. In both size fractions, strong internal mixing of chemical components was typical for LRT particles. Thus, the aging of particles has significant impacts on their chemical, hygroscopic and optical properties, which can largely alter the environmental and health effects of LRT aerosols. Over the Atlantic Ocean, the individual particle composition of small-sized (Dp ~ 1-3 μm) coarse mode particles was affected by continental aerosol plumes to distances of at least 100-1000 km from the coast (e.g. pollutants from industrialized Europe, desert dust from the Sahara and biomass-burning aerosols near the Gulf of Guinea).
The rate of chloride depletion from sea-salt particles was high near the coasts of Europe and Africa when air masses arrived from polluted continental regions. Thus, the LRT of continental aerosols had significant impacts on the composition of the marine boundary-layer atmosphere and seawater. In conclusion, integration of the results obtained using different measurement techniques captured the large spatial and temporal variability of aerosols as observed at terrestrial and marine sites, and assisted in establishing the causal link between land-bound emissions, LRT and air quality.
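The chloride depletion from sea-salt particles described above is commonly attributed in the aerosol literature to acid displacement reactions, in which acidic gases from polluted continental air liberate volatile HCl from sea salt. The abstract does not specify the mechanism, so the following standard textbook reactions are given only as an illustrative sketch:

```latex
\mathrm{NaCl\,(p)} + \mathrm{HNO_3\,(g)} \longrightarrow \mathrm{NaNO_3\,(p)} + \mathrm{HCl\,(g)}
\qquad
2\,\mathrm{NaCl\,(p)} + \mathrm{H_2SO_4\,(g)} \longrightarrow \mathrm{Na_2SO_4\,(p)} + 2\,\mathrm{HCl\,(g)}
```

Here (p) denotes the particle phase and (g) the gas phase; both reactions shift chloride out of the particles, consistent with the high depletion rates observed near polluted coasts.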
Resumo:
Aerosol particles in the atmosphere are known to significantly influence ecosystems, to change air quality and to have adverse health effects. Atmospheric aerosols influence climate by cooling the atmosphere and the underlying surface through scattering of sunlight, by warming the atmosphere through absorption of sunlight and of thermal radiation emitted by the Earth's surface, and by acting as cloud condensation nuclei. Aerosols are emitted from both natural and anthropogenic sources. Depending on their size, they can be transported over significant distances, while undergoing considerable changes in their composition and physical properties. Their lifetime in the atmosphere varies from a few hours to a week. New particle formation is a result of gas-to-particle conversion. Once formed, atmospheric aerosol particles may grow due to condensation or coagulation, or be removed by deposition processes. In this thesis we describe analyses of air masses, meteorological parameters and synoptic situations to reveal conditions favourable for new particle formation in the atmosphere. We studied the concentration of ultrafine particles in different types of air masses, and the role of atmospheric fronts and cloudiness in the formation of atmospheric aerosol particles. The dominant role of Arctic and Polar air masses in causing new particle formation was clearly observed at Hyytiälä, Southern Finland, during all seasons, as well as at other measurement stations in Scandinavia. In all seasons and on multi-year average, Arctic and North Atlantic areas were the sources of nucleation mode particles. In contrast, concentrations of accumulation mode particles and condensation sink values in Hyytiälä were highest in continental air masses, arriving at Hyytiälä from Eastern Europe and Central Russia. The most favourable situation for new particle formation during all seasons was cold air advection after cold-front passages.
Such a period could last a few days until the next front reached Hyytiälä. The frequency of aerosol particle formation correlates with the frequency of low-cloud-amount days in Hyytiälä. Cloudiness of less than 5 octas is one of the factors favouring new particle formation. Cloudiness above 4 octas appears to be an important factor that prevents particle growth, due to the decrease of solar radiation, which is one of the important meteorological parameters in atmospheric particle formation and growth.
Keywords: Atmospheric aerosols, particle formation, air mass, atmospheric front, cloudiness
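The condensation sink mentioned in the abstract quantifies how rapidly condensable vapours are scavenged by the pre-existing particle population. The abstract does not give its definition, so the expression below is the form commonly used in the aerosol literature, shown only for orientation:

```latex
CS = 4\pi D \sum_i \beta_{m}(Kn_i)\, r_i\, N_i
```

Here \(D\) is the diffusion coefficient of the condensing vapour, \(r_i\) and \(N_i\) are the radius and number concentration of particles in size class \(i\), and \(\beta_{m}(Kn_i)\) is the transition-regime correction factor depending on the Knudsen number. A high CS in continental air masses thus suppresses new particle formation by depleting the vapours before they can nucleate.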
Resumo:
A better understanding of the limiting step in a first-order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even to cosmology. This is due to the fact that in most phase transitions the new phase is separated from the mother phase by a free energy barrier. This barrier is crossed in a process called nucleation. Nowadays it is considered that a significant fraction of all atmospheric particles is produced by vapor-to-liquid nucleation. In atmospheric sciences, as well as in other scientific fields, the theoretical treatment of nucleation is mostly based on a theory known as the Classical Nucleation Theory. However, the Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapor-to-liquid nucleation takes place under given conditions. This thesis studies unary homogeneous vapor-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapor and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapour is accurately described by the liquid drop model applied by the Classical Nucleation Theory, once the clusters are larger than some threshold size. The threshold cluster sizes range from a few to some tens of molecules, depending on the interaction potential and temperature. However, the error made in modeling the smallest clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law.
By calculating correction factors to Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates that are in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low-order virial coefficients at modest temperatures and vapour densities, or in other words, in the validity range of the non-interacting cluster theory by Frenkel, Band and Bijl. Our results do not indicate a need for a size-dependent replacement free energy correction. The results also indicate that Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for the calculation of the equilibrium vapour density, the size dependence of the surface tension and the planar surface tension directly from cluster simulations. We also show how the size dependence of the cluster surface tension at the equimolar surface is a function of virial coefficients, a result confirmed by our cluster simulations.
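For orientation, the liquid drop model that the abstract refers to writes the work of forming an n-molecule cluster from a supersaturated vapour as a bulk term plus a surface term; these are the standard Classical Nucleation Theory expressions, not results specific to this thesis:

```latex
W(n) = -n\,\Delta\mu + \sigma\, s_1\, n^{2/3},
\qquad
\left.\frac{\mathrm{d}W}{\mathrm{d}n}\right|_{n^{*}} = 0
\;\Longrightarrow\;
n^{*} = \left(\frac{2\,\sigma\, s_1}{3\,\Delta\mu}\right)^{3},
\qquad
J = K \exp\!\left(-\frac{W(n^{*})}{k_{\mathrm B}T}\right)
```

Here \(\Delta\mu > 0\) is the chemical potential difference between vapour and liquid, \(\sigma\) the planar surface tension, \(s_1 n^{2/3}\) the cluster surface area, \(n^{*}\) the critical cluster size at the top of the barrier, and \(J\) the nucleation rate with kinetic prefactor \(K\). The corrections discussed above amount to replacing the barrier height \(W(n^{*})\) by values computed from the Monte Carlo cluster simulations.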
Resumo:
New stars in galaxies form in dense, molecular clouds of the interstellar medium. Measuring how the mass is distributed in these clouds is of crucial importance for the current theories of star formation. This is because several open issues in these theories, such as the strength of the different mechanisms regulating star formation and the origin of stellar masses, can be addressed using detailed information on the cloud structure. Unfortunately, quantifying the mass distribution in molecular clouds accurately over a wide spatial and dynamical range is a fundamental problem in modern astrophysics. This thesis presents studies examining the structure of dense molecular clouds and the distribution of mass in them, with the emphasis on nearby clouds that are sites of low-mass star formation. In particular, this thesis concentrates on investigating the mass distributions using the near-infrared dust extinction mapping technique. In this technique, the gas column densities towards molecular clouds are determined by examining radiation from the stars that shine through the clouds. In addition, the thesis examines the feasibility of using a similar technique to derive the masses of molecular clouds in nearby external galaxies. The papers presented in this thesis demonstrate how the near-infrared dust extinction mapping technique can be used to extract detailed information on the mass distribution in nearby molecular clouds. Furthermore, such information is used to examine characteristics crucial for star formation in the clouds. Regarding the use of the extinction mapping technique in nearby galaxies, the papers of this thesis show that deriving the masses of molecular clouds using the technique suffers from strong biases. However, it is shown that some structural properties can still be examined with the technique.
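The extinction mapping technique described above rests on a simple chain of standard relations: the reddening of a background star gives the extinction, and the extinction traces the gas column density. The abstract does not quote these relations, so the versions below use commonly adopted literature values (an approximate near-infrared reddening law and the canonical gas-to-extinction ratio) purely as an illustrative sketch:

```latex
E(H-K) = (H-K)_{\mathrm{obs}} - \langle (H-K)_{0} \rangle,
\qquad
A_V \simeq 15.9\; E(H-K),
\qquad
N(\mathrm{H}) \approx 1.9\times10^{21}\ \mathrm{cm^{-2}\,mag^{-1}} \times A_V
```

Here \((H-K)_{\mathrm{obs}}\) is the observed near-infrared colour of a star behind the cloud, \(\langle (H-K)_{0} \rangle\) the mean intrinsic colour of such stars, \(E(H-K)\) the colour excess, \(A_V\) the visual extinction, and \(N(\mathrm{H})\) the total hydrogen column density; summing column densities over a cloud's projected area yields its mass. The biases mentioned for external galaxies arise largely because many unresolved stars and clouds blend within each resolution element, breaking the one-star-one-sightline assumption.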