27 results for scaling
in Helda - Digital Repository of University of Helsinki
Abstract:
Planar curves arise naturally as interfaces between two regions of the plane. An important part of statistical physics is the study of lattice models. This thesis is about the interfaces of 2D lattice models. The scaling limit is an infinite-system limit, taken by letting the lattice mesh decrease to zero. At criticality, the scaling limit of an interface is one of the SLE curves (Schramm-Loewner evolution), introduced by Oded Schramm. This family of random curves is parametrized by a real variable, which determines the universality class of the model. The first and second papers of this thesis study properties of SLEs. They contain two different methods to study the whole SLE curve, which is, in fact, the most interesting object from the statistical physics point of view. These methods are applied to study two symmetries of SLE: reversibility and duality. The first paper uses an algebraic method and a representation of the Virasoro algebra to find martingales common to different processes and, in that way, to confirm the symmetries for polynomial expected values of natural SLE data. In the second paper, a recursion is obtained for the same kind of expected values. The recursion is based on the stationarity of the law of the whole SLE curve under an SLE-induced flow. The third paper deals with one of the most central questions of the field and provides a framework of estimates for describing 2D scaling limits by SLE curves. In particular, it is shown that a weak estimate on the probability of an annulus crossing implies that a random curve arising from a statistical physics model will have scaling limits and that these will be well described by Loewner evolutions with random driving forces.
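For orientation, the Loewner evolutions mentioned above admit a standard formulation (general background, not notation specific to this thesis): the chordal SLE curve is generated by the Loewner equation

```latex
\frac{\partial g_t(z)}{\partial t} = \frac{2}{g_t(z) - W_t}, \qquad g_0(z) = z,
```

where the driving function is $W_t = \sqrt{\kappa}\, B_t$ for a standard Brownian motion $B_t$; the parameter $\kappa \ge 0$ is the real variable that selects the universality class.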
Abstract:
We study the energy current in a model of heat conduction, first considered in detail by Casher and Lebowitz. The model consists of a one-dimensional disordered harmonic chain of n i.i.d. random masses, connected to their nearest neighbors via identical springs, and coupled at the boundaries to Langevin heat baths with respective temperatures T_1 and T_n. Let EJ_n be the steady-state energy current across the chain, averaged over the masses. We prove that EJ_n \sim (T_1 - T_n)n^{-3/2} in the limit n \to \infty, as has been conjectured by various authors over time. The proof relies on a new explicit representation for the elements of the product of associated transfer matrices.
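A standard sketch of the transfer-matrix formulation behind such results (with spring constants normalized to one; general background, not the paper's own notation): for oscillations at frequency $\omega$, the displacement recursion $x_{j+1} = (2 - m_j\omega^2)\,x_j - x_{j-1}$ takes matrix form

```latex
\begin{pmatrix} x_{j+1} \\ x_j \end{pmatrix}
= \begin{pmatrix} 2 - m_j\omega^2 & -1 \\ 1 & 0 \end{pmatrix}
\begin{pmatrix} x_j \\ x_{j-1} \end{pmatrix},
```

so the chain's response is governed by the product of these random matrices over $j = 1, \dots, n$, and it is the elements of this product whose explicit representation underlies the $n^{-3/2}$ asymptotics.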
Abstract:
This thesis reports a corpus-based study of medieval English herbals, which are texts conveying information on medicinal plants. Herbals belong to the medieval medical register. The study charts intertextual parallels within the medieval genre, and between herbals and other contemporary medical texts. It seeks to answer the questions of where and how herbal texts are linked to each other and to other medical writing. The theoretical framework of the study draws on intertextuality and genre studies, manuscript studies, corpus linguistics, and multi-dimensional text analysis. The method combines qualitative and quantitative analyses of textual material from three historical special-language corpora of Middle and Early Modern English, one of which was compiled for the purposes of this study. The text material contains over 800,000 words of medical texts. The time span of the material is from c. 1330 to 1550. Text material is retrieved from the corpora by using plant-name lists as search criteria. The raw data is filtered through qualitative analysis, which produces input for the quantitative analysis, multi-dimensional scaling (MDS). In MDS, the textual space that parallel text passages form is observed, and the observations are explained by a qualitative analysis. This study concentrates on evidence of material and structural intertextuality. The analysis shows patterns of affinity between the texts of the herbal genre, and between herbals and other texts in the medical register. Herbals are most closely linked with recipe collections and regimens of health: these comprise over 95 per cent of the intertextual links between herbals and other medical writing. Links to surgical texts or to specialised medical texts are very few. This can be explained by the history of the herbal genre: as herbals carry information on medical ingredients, herbs, they are relevant for genres related to pharmacological therapy.
Conversely, herbals draw material from recipe collections in order to illustrate the medicinal properties of the herbs they describe. The study points out the close relationship between medical recipes and recipe-like passages in herbals (recipe paraphrases). The examples of recipe paraphrases show that they may have been perceived as indirect instruction. Keywords: medieval herbals, early English medicine, corpus linguistics, intertextuality, manuscript studies
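As an illustration of the MDS step, the following is a minimal sketch of classical metric MDS, which embeds items in a low-dimensional space from their pairwise dissimilarities. This is not the study's actual implementation, and the toy distance matrix between four hypothetical text passages is invented:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed n items in k dimensions
    from an n-by-n matrix of pairwise dissimilarities D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]            # pick the top-k eigenvalues
    L = np.sqrt(np.maximum(w[idx], 0.0))     # guard against tiny negatives
    return V[:, idx] * L                     # coordinates, one row per item

# toy dissimilarities between four hypothetical text passages:
# passages 1-2 and 3-4 are close to each other, far from the other pair
D = np.array([[0., 1., 4., 4.],
              [1., 0., 4., 4.],
              [4., 4., 0., 1.],
              [4., 4., 1., 0.]])
X = classical_mds(D, k=2)
# the embedding places passages 1-2 and 3-4 in two tight clusters
```

In the study's setting, the dissimilarities would be derived from the filtered parallel-passage data, and the resulting textual space is then interpreted qualitatively.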
Abstract:
Strategies of scientific, question-driven inquiry are considered important cultural practices that should be taught in schools and universities. The present study investigates multiple efforts to implement a model of Progressive Inquiry and related Web-based tools in primary, secondary and university-level education, in order to develop guidelines for educators in promoting students' collaborative inquiry practices with technology. The research consists of four studies. In Study I, the aims were to investigate how a human tutor contributed to the university students' collaborative inquiry process through virtual forums, and how the influence of the tutoring activities is demonstrated in the students' inquiry discourse. Study II examined an effort to implement technology-enhanced progressive inquiry as a distance-working project in a middle-school context. Study III examined multiple teachers' methods of organizing progressive inquiry projects in primary and secondary classrooms through a generic analysis framework. In Study IV, a design-based research effort consisting of four consecutive university courses applying progressive inquiry pedagogy was retrospectively re-analyzed in order to develop the generic design framework. The results indicate that appropriate teacher support for students' collaborative inquiry efforts involves an interplay between spontaneity and structure. Careful consideration should be given to the content mastery, critical working strategies or essential knowledge practices that the inquiry approach is intended to promote. In particular, those elements in students' activities should be structured and directed which are central to the aim of Progressive Inquiry, but which the students do not recognize or demonstrate spontaneously, and which are usually not taken into account in existing pedagogical methods or educational conventions.
Such elements are, e.g., productive co-construction activities; sustained engagement in improving produced ideas and explanations; critical reflection on the adopted inquiry practices; and sophisticated use of modern technology for knowledge work. Concerning the scaling-up of inquiry pedagogy, it was concluded that an individual teacher can also apply the principles of Progressive Inquiry in his or her own teaching in many innovative ways, even under various institutional constraints. The developed Pedagogical Infrastructure Framework enabled recognizing and examining some central features, and their interplay, in the designs of the examined inquiry units. The framework may help to recognize and critically evaluate the invisible learning-cultural conventions in various educational settings and can mediate discussions about how to overcome or change them.
Abstract:
Matrix metalloproteinase (MMP) -8, collagenase-2, is a key mediator of irreversible tissue destruction in chronic periodontitis and is detectable in gingival crevicular fluid (GCF). MMP-8 mostly originates from neutrophil leukocytes, the first-line defence cells, which exist abundantly in GCF, especially in inflammation. MMP-8 is capable of degrading almost all extracellular matrix and basement membrane components and is especially efficient against type I collagen. Thus the expression of MMP-8 in GCF could be valuable in monitoring the activity of periodontitis and possibly offers a diagnostic means to predict the progression of periodontitis. In this study the value of MMP-8 detection from GCF in the monitoring of periodontal health and disease was evaluated, with special reference to its ability to differentiate periodontal health from different disease states of the periodontium and to recognise the progression of periodontitis, i.e. active sites. For chair-side detection of MMP-8 from GCF or peri-implant sulcus fluid (PISF) samples, a dip-stick test based on immunochromatography involving two monoclonal antibodies was developed. The immunoassay for the detection of MMP-8 from GCF was found to be more suitable for the monitoring of periodontitis than detection of GCF elastase concentration or activity. Periodontally healthy subjects and individuals suffering from gingivitis or periodontitis could be differentiated by means of GCF MMP-8 levels and dipstick testing when the positive threshold value of the MMP-8 chair-side test was set at 1000 µg/l. MMP-8 dipstick test results from periodontally healthy subjects and from subjects with gingivitis were mainly negative, while periodontitis patients' sites with deep pockets (≥ 5 mm) that bled on probing were most often test positive. Periodontitis patients' GCF MMP-8 levels decreased with hygiene-phase periodontal treatment (scaling and root planing, SRP) and decreased further during the three-month maintenance phase.
A decrease in GCF MMP-8 levels could be monitored with the MMP-8 test. Agreement between the test stick and the quantitative assay was very good (κ = 0.81), and the test provided a baseline sensitivity of 0.83 and specificity of 0.96. During the 12-month longitudinal maintenance phase, periodontitis patients' progressing sites (sites with an increase in attachment loss ≥ 2 mm during the maintenance phase) had elevated GCF MMP-8 levels compared with stable sites. In general, mean MMP-8 concentrations in smokers' (S) sites were lower than in non-smokers' (NS) sites, but in progressing S and NS sites the concentrations were at an equal level. Sites with exceptionally and repeatedly elevated MMP-8 concentrations during the maintenance phase were clustered in smoking patients with a poor response to SRP (refractory patients). These sites especially were identified by the MMP-8 test. Subgingival plaque samples from periodontitis patients' deep periodontal pockets were examined by polymerase chain reaction (PCR) to find out whether periodontal lesions may serve as a niche for Chlamydia pneumoniae. Findings were compared with the clinical periodontal parameters and GCF MMP-8 levels to determine the correlation with periodontal status. Traces of C. pneumoniae were identified from one periodontitis patient's pooled subgingival plaque sample by means of PCR. After periodontal treatment (SRP) the sample was negative for C. pneumoniae. Clinical parameters or biomarkers (MMP-8) of the patient with the positive C. pneumoniae finding did not differ from those of the other study patients. It was concluded that MMP-8 concentrations in GCF of sites from periodontally healthy individuals, subjects with gingivitis and subjects with periodontitis are at different levels. The cut-off value of the developed MMP-8 test is at an optimal level to differentiate between these conditions and can possibly be utilised in the identification of individuals at risk of the transition of gingivitis to periodontitis.
In periodontitis patients, repeatedly elevated GCF MMP-8 concentrations may indicate sites at risk of progression of periodontitis, as well as patients with a poor response to conventional periodontal treatment (SRP). This can be monitored by MMP-8 testing. Despite the lower mean GCF MMP-8 concentrations in smokers, a fraction of smokers' sites expressed very high MMP-8 concentrations together with enhanced periodontal activity and could be identified with the MMP-8-specific chair-side test. Deep periodontal lesions may be niches for non-periodontopathogenic micro-organisms with systemic effects, such as C. pneumoniae, and possibly play a role in transmission from one subject to another.
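The evaluation of a dichotomous chair-side test against a quantitative reference rests on sensitivity, specificity and Cohen's kappa. A minimal sketch of these standard computations, with purely hypothetical 2x2 counts chosen for illustration (not the study's data):

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Sensitivity, specificity and Cohen's kappa for a 2x2
    comparison of a dichotomous test against a reference standard."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)               # true positives / all diseased
    specificity = tn / (tn + fp)               # true negatives / all healthy
    p_obs = (tp + tn) / n                      # observed agreement
    p_pos = ((tp + fp) / n) * ((tp + fn) / n)  # chance agreement on positives
    p_neg = ((fn + tn) / n) * ((fp + tn) / n)  # chance agreement on negatives
    p_exp = p_pos + p_neg
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return sensitivity, specificity, kappa

# hypothetical counts for illustration only
sens, spec, kappa = diagnostic_stats(tp=25, fp=3, fn=5, tn=67)
```

With these invented counts the three figures come out near the levels reported in the abstract, which shows how a kappa around 0.8 corresponds to "very good" agreement between a dipstick result and the quantitative assay.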
Abstract:
The aim of the present study was to assess dental health and its determinants among 15-year-olds in Tehran, Iran, and to evaluate the impact of a school-based educational intervention on their oral cleanliness and gingival health. The total sample comprised 506 students. Data collection was performed through a clinical dental examination and a self-administered structured questionnaire. The questionnaire covered the student's background information, socio-economic status, self-perceived dental health, tooth-brushing, and smoking. The clinical dental examination covered caries experience, gingival status, dental plaque status, and orthodontic treatment needs. Participation was voluntary, and all students responded to the questionnaire. Only three students refused the clinical dental examination. The intervention was based on exposing students to dental health education through a leaflet and a videotape designed for the present study. The outcome examinations took place 12 weeks after the baseline among the three groups of the intervention trial (leaflet, videotape, and control). High participation rates at the baseline and few drop-outs (7%) in the intervention speak for the reliability of the results. The mean value of the DMFT (D=decayed, M=missing, and F=filled teeth) index of the 15-year-olds was 2.1, comprising DT=0.9, MT=0.2, and FT=1.0, with no gender differences. Dental plaque existed on at least one index tooth of all students, and a healthy periodontium (Community Periodontal Index=0) was found in less than 10% of students. A need for caries treatment existed in 40% of students, for scaling in 24%, for oral hygiene instructions in all, and for orthodontic treatment in 26%. Students whose parents had the highest level of education had fewer dental caries (36% vs. 48%) and less dental plaque (77% vs. 88%). Of all students, 78% assessed their dental health as good or better. Even more of those with DMFT=0 (73% vs. 27%) and DT=0 (68% vs. 32%) assessed their dental health as good or better. Smokers comprised 5% of the boys and 2% of the girls. Smoking was more common among students of less-educated parents (6% vs. 3%). Of all students, 26% reported twice-daily tooth-brushing; girls (38% vs. 15%) and those of higher socio-economic background (33% vs. 17%) did so more frequently. The best predictors of a good level of oral cleanliness were female gender and twice-daily tooth-brushing. The present study demonstrated that a school-based educational intervention can be effective in the short term in improving the oral cleanliness and gingival health of adolescents. A reduction of at least 50% in the number of teeth with dental plaque compared to baseline was achieved by 58% of the students in the leaflet group, by 37% in the videotape group, and by 10% of the controls. The corresponding figures for gingival bleeding were 72%, 64%, and 30%. For improving the oral cleanliness and gingival health of adolescents in countries such as Iran with a developing oral health system, school-based educational interventions should be established with a focus on oral self-care and oral health education messages. Emphasizing the immediate gains from good oral hygiene, such as fresh breath, clean teeth, and attractive appearance, should be a key aspect in motivating these adolescents to learn and maintain good dental health, whilst in planning school-based dental health interventions, special attention should be given to boys and those with lower socio-economic status. Author's address: Reza Yazdani, Department of Oral Public Health, Institute of Dentistry, University of Helsinki, P.O. Box 41, FI-00014 Helsinki, Finland. E-mail: reza.yazdani@helsinki.fi
Abstract:
This thesis consists of three articles on passive vector fields in turbulence. The vector fields interact with a turbulent velocity field, which is described by the Kraichnan model. The effect of the Kraichnan model on the passive vectors is studied via an equation for the pair correlation function and its solutions. The first paper is concerned with the passive magnetohydrodynamic equations. Emphasis is placed on the so-called "dynamo effect", which in the present context is understood as an unbounded growth of the pair correlation function. The exact analytical conditions for such growth are found in the cases of zero and infinite Prandtl numbers. The second paper contains an extensive study of a number of passive vector models. Emphasis is now on the properties of the (assumed) steady state, namely anomalous scaling, anisotropy, and small- and large-scale behavior with different types of forcing or stirring. The third paper is in many ways a completion of the previous one in its study of the steady-state existence problem. Conditions for the existence of the steady state are found in terms of the spatial roughness parameter of the turbulent velocity field.
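For background, the Kraichnan model referred to above is a Gaussian velocity ensemble, white in time, whose covariance is conventionally written as

```latex
\langle v^i(t,\mathbf{x})\, v^j(t',\mathbf{y}) \rangle
= \delta(t-t')\, D^{ij}(\mathbf{x}-\mathbf{y}),
\qquad
D^{ij}(\mathbf{0}) - D^{ij}(\mathbf{r}) \sim |\mathbf{r}|^{\xi},
```

where $\xi \in (0,2)$ is the spatial roughness exponent of the velocity field. This $\xi$ is the roughness parameter in which the steady-state existence conditions of the third paper are expressed.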
Abstract:
Volatility is central to options pricing and risk management. It reflects the uncertainty of investors and the inherent instability of the economy. Time series methods are among the most widely applied scientific methods for analyzing and predicting volatility. Very frequently sampled data contain much valuable information about the different elements of volatility and may ultimately reveal the reasons for time-varying volatility. The use of such ultra-high-frequency data is common to all three essays of the dissertation. The dissertation belongs to the field of financial econometrics. The first essay uses wavelet methods to study the time-varying behavior of scaling laws and long memory in the five-minute volatility series of Nokia on the Helsinki Stock Exchange around the burst of the IT bubble. The essay is motivated by earlier findings which suggest that different scaling laws may apply to intraday time-scales and to larger time-scales, implying that the so-called annualized volatility depends on the data sampling frequency. The empirical results confirm the appearance of time-varying long memory and different scaling laws that, to a significant extent, can be attributed to investor irrationality and to an intraday volatility periodicity called the New York effect. The findings have potentially important consequences for options pricing and risk management, which commonly assume constant memory and scaling. The second essay investigates modelling the duration between trades in stock markets. Durations convey information about investor intentions and provide an alternative view of volatility. Generalizations of standard autoregressive conditional duration (ACD) models are developed to meet needs observed in previous applications of the standard models.
According to the empirical results, based on data on actively traded stocks on the New York Stock Exchange and the Helsinki Stock Exchange, the proposed generalization clearly outperforms the standard models and also performs well in comparison with another recently proposed alternative to the standard models. The distribution used to derive the generalization may also prove valuable in other areas of risk management. The third essay studies empirically the effect of decimalization on volatility and market microstructure noise. Decimalization refers to the change from fractional to decimal pricing; it was carried out on the New York Stock Exchange in January 2001. The methods used here are more accurate than those in earlier studies and put more weight on market microstructure. The main result is that decimalization decreased observed volatility by reducing noise variance, especially for the highly active stocks. The results are of use in risk management and market mechanism design.
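The dependence of annualized volatility on sampling frequency can be illustrated with a scaling law of the form sigma(tau) proportional to tau**H, where H = 0.5 recovers the usual square-root-of-time rule. The sketch below uses invented figures for illustration; the Hurst-type exponent 0.58 is a hypothetical long-memory alternative, not an estimate from the dissertation:

```python
def annualized_vol(sigma_sample, periods_per_year, hurst=0.5):
    """Scale a per-period volatility to the annual horizon under a
    scaling law sigma(tau) ~ tau**hurst; hurst=0.5 is the standard
    square-root-of-time rule."""
    return sigma_sample * periods_per_year ** hurst

# hypothetical per-5-minute volatility of 0.1%, ~19656 five-minute bars/year
iid = annualized_vol(0.001, 19656, hurst=0.5)    # square-root-of-time rule
lm  = annualized_vol(0.001, 19656, hurst=0.58)   # long-memory alternative
# under the long-memory exponent the annualized figure is materially larger,
# so the "annualized volatility" depends on which scaling law holds
```

This is the sense in which intraday scaling laws that differ from the square-root rule make annualized volatility depend on the data sampling frequency.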
Abstract:
Lead contamination in the environment is of particular concern, as lead is a known toxin. Until recently, however, much less attention has been given to the local contamination caused by activities at shooting ranges than to large-scale industrial contamination. In Finland, more than 500 tons of Pb are produced each year for shotgun ammunition. The contaminant threatens various organisms, groundwater and the health of human populations. However, the forest at shooting ranges usually shows no visible sign of stress compared to nearby clean environments. The aboveground biota normally reflects the belowground ecosystem. Thus, the soil microbial communities appear to bear strong resistance to contamination, despite the influence of lead. The studies forming this thesis investigated a shooting range site at Hälvälä in Southern Finland, which is heavily contaminated by lead pellets. It was previously shown experimentally that the growth of grasses and the degradation of litter are retarded there. Measurements of the acute toxicity of the contaminated soil or soil extracts gave conflicting results, as enchytraeid worms used as toxicity reporters were strongly affected, while reporter bacteria showed no or very minor decreases in viability. Measurements using sensitive inducible luminescent reporter bacteria suggested that the bioavailability of lead in the soil is indeed low, and this notion was supported by the very low water extractability of the lead. Nevertheless, the frequency of lead-resistant cultivable bacteria was elevated, based on the isolation of cultivable strains. The bacterial and fungal diversity in heavily lead-contaminated shooting sectors was compared with that of pristine sections of the shooting range area. The bacterial 16S rRNA gene and the fungal ITS rRNA gene were amplified, cloned and sequenced using total DNA extracted from the soil humus layer as the template.
Altogether, 917 sequenced bacterial clones and 649 sequenced fungal clones revealed a high soil microbial diversity. No effect of lead contamination was found on bacterial richness or diversity, while fungal richness and diversity differed significantly between lead-contaminated and clean control areas. However, even in the case of fungi, genera that were deemed sensitive were not totally absent from the contaminated area: only their relative frequency was significantly reduced. Some operational taxonomic units (OTUs) assigned to Basidiomycota were clearly affected, and were much rarer in the lead-contaminated areas. The studies of this thesis surveyed EcM sporocarps, analyzed morphotyped EcM root tips by direct sequencing, and 454-pyrosequenced fungal communities in in-growth bags. A total of 32 EcM fungi that formed conspicuous sporocarps, 27 EcM fungal OTUs from 294 root tips, and 116 EcM fungal OTUs from a total of 8,194 ITS2 454 sequences were recorded. Ordination analyses by non-parametric multidimensional scaling (NMS) indicated that Pb enrichment induced a shift in the EcM community composition. This was visible as indicative trends in the sporocarp and root-tip datasets, but was explicitly clear in the communities observed in the in-growth bags. The compositional shift in the EcM community was mainly attributable to an increase in the frequencies of OTUs assigned to the genus Thelephora, and to a decrease in the OTUs assigned to Pseudotomentella, Suillus and Tylospora, in Pb-contaminated areas compared to the control. The enrichment of Thelephora in contaminated areas was also observed when examining the total fungal communities in soil using DNA cloning and sequencing technology. While the compositional shifts are clear, their functional consequences for the dominant trees or the soil ecosystem remain undetermined. The results indicate that at the Hälvälä shooting range, lead influences the fungal communities but not the bacterial communities.
The forest ecosystem shows apparent functional redundancy, since no significant effects were seen on forest trees. With 454 pyrosequencing, the number of sequences in a single analysis run can now be up to one million, and the method has been applied in microbial ecology studies to characterize microbial communities. The handling of such sequence data with traditional programs is becoming difficult and exceedingly time consuming, and novel tools are needed to handle the vast amounts of data being generated. The field of microbial ecology has recently benefited from the availability of a number of tools for describing and comparing microbial communities using robust statistical methods. However, although these programs provide methods for rapid calculation, it has become necessary to make them more amenable to the larger datasets and numbers of samples produced by pyrosequencing. As part of this thesis, a new program, MuSSA (Multi-Sample Sequence Analyser), was developed to handle sequence data from novel high-throughput sequencing approaches in microbial community analyses. The greatest advantage of the program is that large volumes of sequence data can be manipulated, and general OTU series with frequency values can be calculated across a large number of samples.
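The kind of OTU-by-sample frequency table described above can be sketched as follows; the function name and the toy reads are hypothetical illustrations, not MuSSA's actual interface:

```python
from collections import Counter

def otu_frequency_table(assignments):
    """Build an OTU-by-sample frequency table from per-sample lists of
    OTU assignments (one entry per sequence read)."""
    counts = {sample: Counter(otus) for sample, otus in assignments.items()}
    all_otus = sorted(set().union(*(c.keys() for c in counts.values())))
    return {otu: {s: counts[s].get(otu, 0) for s in counts} for otu in all_otus}

# toy reads from two hypothetical samples, echoing the abstract's finding
# that Thelephora is enriched and Suillus depleted in contaminated soil
reads = {"contaminated": ["Thelephora"] * 5 + ["Suillus"],
         "control":      ["Suillus"] * 4 + ["Tylospora"] * 3}
table = otu_frequency_table(reads)
# table["Thelephora"] == {"contaminated": 5, "control": 0}
```

With real pyrosequencing data the per-sample lists would come from assigning each read to an OTU, and the same table structure scales to hundreds of samples.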
Abstract:
Phytoplankton ecology and productivity is one of the main branches of contemporary oceanographic research. Research groups in this branch have increasingly started to utilise bio-optical applications. My main research objective was to critically investigate the advantages and deficiencies of fast repetition rate (FRR) fluorometry for studies of phytoplankton productivity and of the responses of phytoplankton to varying environmental stress. Second, I aimed to clarify the applicability of the FRR system to the optical environment of the Baltic Sea. The FRR system offers a highly dynamic tool for studies of phytoplankton photophysiology and productivity, both in the field and in a controlled environment. The FRR method provides high-frequency in situ determinations of the light-acclimative and photosynthetic parameters of intact phytoplankton communities. The measurement protocol is relatively easy to use, with no phases requiring analytical determinations. The most notable application of the FRR system lies in its potential for primary productivity (PP) estimation. However, the realisation of this scheme is not straightforward. FRR-PP estimates, based on the photosynthetic electron flow (PEF) rate, are linearly related to photosynthetic gas-exchange (fixation of 14C) PP only in environments where photosynthesis is light-limited. If light limitation is not present, as is usually the case in the near-surface layers of the water column, the two PP approaches will deviate. The prompt response of the PEF rate to short-term variability in the natural light field makes field comparisons between PEF-PP and 14C-PP difficult to interpret, because this variability is averaged out in the 14C incubations. Furthermore, FRR-based PP models are tuned to closely follow the vertical pattern of the underwater irradiance.
Due to the photoacclimational plasticity of phytoplankton, this easily leads to overestimates of water-column PP if precautionary measures are not taken. Natural phytoplankton is subject to broad-waveband light. Active non-spectral bio-optical instruments, like the FRR fluorometer, emit light in a relatively narrow waveband, which by its nature does not represent the in situ light field. Thus, the spectrally dependent parameters provided by the FRR system need to be spectrally scaled to the natural light field of the Baltic Sea. In general, the requirement of spectral scaling in water bodies under terrestrial impact concerns all light-adaptive parameters provided by any active non-spectral bio-optical technique. The FRR system can be applied to studies of all phytoplankton that possess efficient light harvesting in the waveband matching the bluish FRR excitation. Although these taxa cover the large bulk of all phytoplankton taxa, one exception with a pronounced ecological significance is found in the Baltic Sea. The FRR system cannot be used to monitor the photophysiology of cyanobacterial taxa harvesting light in the yellow-red waveband. These include the ecologically significant bloom-forming cyanobacterial taxa in the Baltic Sea.
Abstract:
Industrial ecology is an important field of sustainability science. It can be applied to study environmental problems in a policy-relevant manner. Industrial ecology uses an ecosystem analogy: it aims at closing the loop of materials and substances while reducing resource consumption and environmental emissions. Emissions from human activities are related to human interference in material cycles. Carbon (C), nitrogen (N) and phosphorus (P) are essential elements for all living organisms, but in excess they have negative environmental impacts, such as climate change (CO2, CH4, N2O), acidification (NOx) and eutrophication (N, P). Several indirect macro-level drivers affect how emissions change. Population and affluence (GDP/capita) often act as upward drivers of emissions. Technology, as emissions per service used, and consumption, as the economic intensity of use, may act as drivers resulting in a reduction in emissions. In addition, the development of country-specific emissions is affected by international trade. The aim of this study was to analyse changes in emissions as affected by macro-level drivers in different European case studies. ImPACT decomposition analysis (IPAT identity) was applied as the method in papers I–III. The macro-level perspective was applied to evaluate CO2 emission reduction targets (paper II) and the sharing of greenhouse gas emission reduction targets (paper IV) in the European Union (EU27) up to the year 2020. Data for the study were mainly gathered from official statistics. In all cases, the results were discussed from an environmental policy perspective. The development of nitrogen oxide (NOx) emissions was analysed in the Finnish energy sector over a long time period, 1950–2003 (paper I). Finnish emissions of NOx began to decrease in the 1980s as progress in technology, in terms of NOx/energy, curbed the impact of growth in affluence and population.
Carbon dioxide (CO2) emissions related to energy use during 1993–2004 (paper II) were analysed by country and region within the European Union. Considering energy-based CO2 emissions in the European Union, dematerialization and decarbonisation did occur, but not sufficiently to offset population growth and the rapidly increasing affluence during 1993–2004. The development of the nitrogen and phosphorus load from aquaculture in relation to salmonid consumption in Finland during 1980–2007 was examined, including international trade in the analysis (paper III). A regional environmental issue, eutrophication of the Baltic Sea, and a marginal yet locally important source of nutrients was used as a case. Nutrient emissions from Finnish aquaculture decreased from the 1990s onwards: although population, affluence and salmonid consumption steadily increased, aquaculture technology improved and the relative share of imported salmonids increased. According to the sustainability challenge in industrial ecology, the environmental impact of growing population size and affluence should be compensated by improvements in technology (emissions/service used) and by dematerialisation. In the studied cases, the emission intensity of energy production could be lowered for NOx by cleaning the exhaust gases. Reorganization of the structure of energy production as well as technological innovations will be essential in lowering the emissions of both CO2 and NOx. Regarding the intensity of energy use, making the combustion of fuels more efficient and reducing energy use are essential. In reducing nutrient emissions from Finnish aquaculture to the Baltic Sea (paper III) through technology, limits set by, among others, the biological and physical properties of cultured fish will eventually be faced. Regarding consumption, salmonids are preferred to many other protein sources. Regarding trade, increasing the proportion of imports will outsource the impacts.
Besides improving technology and dematerialisation, other viewpoints may also be needed. Reducing the total amount of nutrients cycling in energy systems and eventually contributing to NOx emissions needs to be emphasised. Considering aquaculture emissions, nutrient cycles can be partly closed by using local fish as feed to replace imported feed. In particular, reducing CO2 emissions in the future will be a very challenging task when considering the necessary rates of dematerialisation and decarbonisation (paper II). Climate change mitigation may have to focus on greenhouse gases other than CO2 and, among other options, on the potential role of biomass as a carbon sink. The global population is growing and scaling up the environmental impact. Population issues and growing affluence must be considered when discussing emission reductions. Climate policy has only very recently had an influence on emissions, and strong actions are now called for in climate change mitigation. Environmental policies in general must cover all the regions related to production and impacts in order to avoid outsourcing of emissions and leakage effects. The macro-level drivers affecting changes in emissions can be identified with the ImPACT framework. Statistics for generally known macro-indicators are currently relatively well available for different countries, and the method is transparent. In the papers included in this study, a similar method was successfully applied to different types of case studies. Using transparent macro-level figures and a simple top-down approach is also appropriate in evaluating and setting international emission reduction targets, as demonstrated in papers II and IV. The projected rates of population and affluence growth are especially worth considering when setting targets. However, sensitivities in the calculations must be carefully acknowledged. In the basic form of the ImPACT model, the economic intensity of consumption and the emission intensity of use are included.
In seeking to examine not only consumption but also international trade in more detail, imports were included in paper III. This example demonstrates well how outsourcing of production influences domestic emissions. Country-specific production-based emissions have often been used in similar decomposition analyses. Nevertheless, trade-related issues must not be ignored.
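The ImPACT identity described above is multiplicative, which is what makes the decomposition of emission changes into driver contributions exact. A minimal sketch, using entirely hypothetical numbers (not data from the study) and illustrative units:

```python
# Illustrative sketch of the ImPACT identity: emissions Im = P * A * C * T,
# where P is population, A affluence (GDP per capita), C the economic
# intensity of use (e.g. energy per unit GDP) and T technology (emissions
# per unit of service, e.g. CO2 per unit energy). All values are
# hypothetical placeholders, not figures from the thesis.

def impact(P, A, C, T):
    """Emissions implied by the ImPACT identity."""
    return P * A * C * T

# Two hypothetical years: population and affluence rise, while the
# intensity of use and the emission intensity fall.
y1 = dict(P=5.0e6, A=20_000.0, C=8.0e-6, T=0.25)
y2 = dict(P=5.2e6, A=24_000.0, C=7.0e-6, T=0.22)

# Multiplicative decomposition: the ratio of total emissions factors
# exactly into the ratios of the individual drivers.
ratio_total = impact(**y2) / impact(**y1)
driver_ratios = {k: y2[k] / y1[k] for k in y1}

product = 1.0
for r in driver_ratios.values():
    product *= r
assert abs(ratio_total - product) < 1e-12
```

In this toy case the improvements in C and T almost offset the growth in P and A, leaving total emissions nearly unchanged, which mirrors the kind of offsetting discussed in the abstract.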
Abstract:
The Earth's ecosystems are protected from the dangerous part of solar ultraviolet (UV) radiation by stratospheric ozone, which absorbs most of the harmful UV wavelengths. Severe depletion of stratospheric ozone has been observed in the Antarctic region, and to a lesser extent in the Arctic and at midlatitudes. Concern about the effects of increasing UV radiation on human beings and the natural environment has led to ground-based monitoring of UV radiation. In order to achieve high-quality UV time series for scientific analyses, proper quality control (QC) and quality assurance (QA) procedures have to be followed. In this work, practices of QC and QA are developed for Brewer spectroradiometers and NILU-UV multifilter radiometers, which measure in the Arctic and Antarctic regions, respectively. These practices are applicable to other UV instruments as well. The spectral features and the effects of different factors affecting UV radiation were studied for the spectral UV time series at Sodankylä. The QA of the Finnish Meteorological Institute's (FMI) two Brewer spectroradiometers included daily maintenance, laboratory characterizations, the calculation of long-term spectral responsivity, data processing and quality assessment. New methods were developed for the cosine correction, the temperature correction and the calculation of long-term changes in spectral responsivity. Reconstructed UV irradiances were used as a QA tool for the spectroradiometer data. The actual cosine correction factor was found to vary within 1.08–1.12 and 1.08–1.13 for the two instruments. The temperature characterization showed a linear dependence between the instrument's internal temperature and the photon counts per cycle. Both Brewers have participated in international spectroradiometer comparisons and have shown good stability. The differences between the Brewers and the portable reference spectroradiometer QASUME have been within 5% during 2002–2010.
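A linear dependence between internal temperature and photon counts, as reported above, can be removed by normalising measured counts to a reference temperature. A minimal sketch; the coefficient and reference temperature below are hypothetical placeholders, not characterisation results from the thesis:

```python
# Sketch of a linear temperature correction for photon counts, assuming
# counts_measured = counts_true * (1 + ALPHA * (T - T_REF)).
# Both constants are assumed values for illustration only.

T_REF = 25.0    # reference internal temperature, deg C (assumed)
ALPHA = 0.002   # fractional change in counts per deg C (assumed)

def temperature_corrected(counts, t_internal):
    """Normalise measured counts to the reference temperature."""
    return counts / (1.0 + ALPHA * (t_internal - T_REF))

# At the reference temperature the correction is the identity.
assert temperature_corrected(1000.0, T_REF) == 1000.0
# A warmer instrument reads high here, so its counts are scaled down.
assert abs(temperature_corrected(1010.0, 30.0) - 1000.0) < 1e-9
```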
The features of the spectral UV radiation time series at Sodankylä were analysed for the period 1990–2001. No statistically significant long-term changes in UV irradiances were found, and the results were strongly dependent on the time period studied. Ozone was the dominant factor affecting UV radiation during springtime, whereas clouds played a more important role during summertime. During this work, the Antarctic NILU-UV multifilter radiometer network was established by the Instituto Nacional de Meteorología (INM) as a joint Spanish-Argentinian-Finnish cooperation project. As part of this work, the QC/QA practices of the network were developed. They included training of the operators, daily maintenance, regular lamp tests and solar comparisons with the travelling reference instrument. Drifts of up to 35% in the sensitivity of the channels of the NILU-UV multifilter radiometers were found during the first four years of operation. This work emphasised the importance of proper QC/QA, including regular lamp tests, also for multifilter radiometers. The effects of the drifts were corrected by a method that scales the site NILU-UV channels to those of the travelling reference NILU-UV. After correction, the mean ratios of erythemally weighted UV dose rates measured during solar comparisons between the reference NILU-UV and the site NILU-UVs were 1.007±0.011 and 1.012±0.012 for Ushuaia and Marambio, respectively, for solar zenith angles of up to 80°. Solar comparisons between the NILU-UVs and spectroradiometers showed a ±5% difference near local noon, which can be seen as proof of successful QC/QA procedures and transfer of irradiance scales. This work also showed that UV measurements made in the Arctic and Antarctic can be comparable with each other.
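The drift correction described above scales each site channel to the travelling reference. One simple way to derive such a scale factor is as the mean ratio of simultaneous readings during a solar comparison; the readings below are illustrative numbers, not network data:

```python
# Sketch of scaling a drifted site channel to a travelling reference:
# a scale factor is derived from simultaneous readings during a solar
# comparison and then applied to site data. Values are hypothetical.

def scale_factor(reference_readings, site_readings):
    """Mean ratio of simultaneous reference/site readings."""
    ratios = [r / s for r, s in zip(reference_readings, site_readings)]
    return sum(ratios) / len(ratios)

# Hypothetical simultaneous dose rates: the site channel has drifted
# to read about 10% low relative to the reference.
ref = [100.0, 120.0, 80.0]
site = [90.0, 108.0, 72.0]

k = scale_factor(ref, site)
corrected = [k * s for s in site]
assert all(abs(c - r) < 1e-9 for c, r in zip(corrected, ref))
```

In practice the factor would be derived per channel and checked against regular lamp tests, as the abstract describes.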
Abstract:
A better understanding of the rate-limiting step in a first-order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even cosmology. This is because in most phase transitions the new phase is separated from the mother phase by a free energy barrier, which is crossed in a process called nucleation. Nowadays it is considered that a significant fraction of all atmospheric particles is produced by vapor-to-liquid nucleation. In atmospheric sciences, as in other scientific fields, the theoretical treatment of nucleation is mostly based on the Classical Nucleation Theory. However, the Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapor-to-liquid nucleation takes place under given conditions. This thesis studies unary homogeneous vapor-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapor and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapor is accurately described by the liquid drop model applied by the Classical Nucleation Theory once the clusters are larger than some threshold size. The threshold cluster sizes contain only a few to some tens of molecules, depending on the interaction potential and temperature. However, the error made in modeling the smallest clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law.
By calculating correction factors to Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates that are in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low-order virial coefficients at modest temperatures and vapor densities, in other words, in the validity range of the non-interacting cluster theory of Frenkel, Band and Bijl. Our results do not indicate a need for a size-dependent replacement free energy correction. The results also indicate that the Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for calculating the equilibrium vapor density, the size dependence of the surface tension and the planar surface tension directly from cluster simulations. Furthermore, we show that the size dependence of the cluster surface tension at the equimolar surface is a function of the virial coefficients, a result confirmed by our cluster simulations.
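The liquid drop picture underlying the Classical Nucleation Theory balances a bulk term, which favours growth at supersaturation S > 1, against a surface term proportional to n^(2/3). A minimal sketch in reduced units (all parameter values are illustrative, not fitted to argon or water):

```python
import math

# Sketch of the Classical Nucleation Theory (liquid drop) work of
# formation, W(n) = -n*kT*ln(S) + theta*n**(2/3), in reduced units.
# kT, S and theta below are illustrative placeholders.

kT = 1.0      # thermal energy (reduced units)
S = 2.0       # supersaturation ratio
theta = 5.0   # reduced surface energy coefficient

def work_of_formation(n):
    return -n * kT * math.log(S) + theta * n ** (2.0 / 3.0)

# Critical cluster size from dW/dn = 0: n* = (2*theta / (3*kT*ln S))**3,
# and the barrier height is W(n*).
n_star = (2.0 * theta / (3.0 * kT * math.log(S))) ** 3
barrier = work_of_formation(n_star)

# Numerical check: the barrier is the maximum of W over cluster size.
ns = [i / 10.0 for i in range(1, 2000)]
assert abs(max(work_of_formation(n) for n in ns) - barrier) < 1e-2
```

The same closed form gives the barrier as 4*theta**3 / (27*(kT*ln S)**2); the thesis's point is precisely that for clusters below the threshold size this liquid drop W(n) deviates from the simulated free energies.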
Abstract:
Silicon particle detectors are used in several applications, and future large-scale experiments will clearly require better hardness against particle radiation than can be provided today. To achieve this goal, more irradiation studies with defect-generating bombarding particles are needed. Protons can be considered important bombarding species, although neutrons and electrons are perhaps the most widely used particles in such irradiation studies. Protons provide unique possibilities, as their defect production rates are clearly higher than those of neutrons and electrons, and their damage creation in silicon is most similar to that of pions. This thesis describes the development and testing of an irradiation facility that provides cooling of the detector and on-line electrical characterisation, such as current-voltage (IV) and capacitance-voltage (CV) measurements. This irradiation facility, which employs a 5-MV tandem accelerator, appears to function well, but some disadvantageous limitations are related to MeV-proton irradiation of silicon particle detectors. Typically, detectors are in a non-operational mode during irradiation (i.e., without an applied bias voltage). However, in real experiments the detectors are biased; the ionising protons generate electron-hole pairs, and a rise in the proton flux may cause the detector to break down. This limits the proton flux available for the irradiation of biased detectors. In this work, it is shown that if detectors are kept operational during irradiation, the electric field decreases the introduction rate of negative space charge and of current-related damage. The effects of various particles at different energies are scaled to each other by the non-ionising energy loss (NIEL) hypothesis. The type of defects induced by irradiation depends on the energy used, and this thesis also discusses the minimum proton energy at which the NIEL scaling remains valid.
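Under the NIEL hypothesis mentioned above, damage from different particles and energies is compared by converting each fluence to a 1 MeV neutron equivalent fluence via a hardness factor. A minimal sketch; the factors below are hypothetical placeholders, not tabulated NIEL values or results from the thesis:

```python
# Sketch of NIEL (non-ionising energy loss) scaling: the hardness factor
# kappa of a particle at a given energy is its NIEL relative to that of
# 1 MeV neutrons, and fluences are compared as 1 MeV neutron equivalent
# fluences. All kappa values here are hypothetical placeholders.

HARDNESS_FACTOR = {            # kappa = NIEL(particle, E) / NIEL(1 MeV n)
    ("neutron", 1.0): 1.0,     # reference point by definition
    ("proton", 10.0): 4.0,     # hypothetical
    ("proton", 24.0): 2.0,     # hypothetical
}

def neq_fluence(particle, energy_mev, fluence):
    """1 MeV neutron equivalent fluence under the NIEL hypothesis."""
    return HARDNESS_FACTOR[(particle, energy_mev)] * fluence

# Under these placeholder factors, a quarter of the 10 MeV proton fluence
# is expected to cause the same bulk damage as the reference neutrons.
assert neq_fluence("proton", 10.0, 2.5e13) == neq_fluence("neutron", 1.0, 1.0e14)
```

The thesis's question of a minimum valid proton energy corresponds to asking below which energy such a single multiplicative factor stops describing the defect types actually produced.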
Abstract:
This thesis concerns the dynamics of nanoparticle impacts on solid surfaces. Such impacts occur, for instance, in space, where micro- and nanometeoroids hit the surfaces of planets, moons, and spacecraft. On Earth, materials are bombarded with nanoparticles in cluster ion beam devices in order to clean or smooth their surfaces, or to analyse their elemental composition. In both cases, the result depends on the combined effects of countless single impacts. However, the dynamics of single impacts must be understood before the overall effects of nanoparticle radiation can be modelled. In addition to applications, nanoparticle impacts are also important to basic research in nanoscience, because the impacts provide an excellent test case for the applicability of atomic-level interaction models under very dynamic conditions. In this thesis, the stopping of nanoparticles in matter is explored using classical molecular dynamics computer simulations. The materials investigated are gold, silicon, and silica. Impacts on silicon through a native oxide layer and the formation of complex craters are also simulated. Nanoparticles up to a diameter of 20 nm (315,000 atoms) were used as projectiles. The molecular dynamics method and the interatomic potentials for silicon and gold are examined in this thesis. It is shown that the displacement cascade expansion mechanism and crater crown formation are very sensitive to the choice of atomic interaction model. However, the best of the current interatomic models can be utilized in nanoparticle impact simulations if caution is exercised. The stopping of monatomic ions in matter is nowadays understood very well. However, the interactions become very complex when several atoms impact on a surface simultaneously and within a short distance of each other, as happens in a nanoparticle impact. A high energy density is deposited in a relatively small volume, which induces ejection of material and the formation of a crater.
Very high yields of excavated material are observed experimentally. In addition, the yields scale nonlinearly with the cluster size and impact energy at small cluster sizes, whereas in macroscopic hypervelocity impacts the scaling is linear. The aim of this thesis is to explore the atomistic mechanisms behind the nonlinear scaling at small cluster sizes. It is shown here that the nonlinear scaling of the ejected material yield disappears at large impactor sizes because the stopping mechanism of nanoparticles gradually changes to the same mechanism as in macroscopic hypervelocity impacts. The high yields at small impactor sizes are due to the early escape of energetic atoms from the hot region. In addition, the sputtering yield is shown to depend strongly on the initial spatial energy and momentum distributions that the nanoparticle induces in the material in the first phase of the impact. In the later phases, the ejection of material occurs by several mechanisms. The most important mechanism at high energies or large cluster sizes is the ejection of atomic clusters from the transient liquid crown that surrounds the crater. The cluster impact dynamics observed in the simulations are in agreement with several recent experimental results. In addition, it is shown that relatively weak impacts can modify the surface of an amorphous target over a larger area than was previously expected. This is a probable explanation for the formation of the complex crater shapes observed on these surfaces with atomic force microscopy. Clusters consisting of hundreds of thousands of atoms induce long-range modifications in crystalline gold.
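Whether yield scaling is linear or super-linear is typically diagnosed from the slope of a log-log fit of yield against energy (or cluster size). A minimal sketch with synthetic placeholder data generated from an assumed exponent of 1.5, not simulation data from the thesis:

```python
import math

# Sketch: diagnosing nonlinear yield scaling Y ~ E**a by a log-log fit.
# The (E, Y) pairs are synthetic, generated with a = 1.5; a slope > 1
# indicates super-linear (nonlinear) scaling, while a slope of 1 would
# correspond to the linear scaling of macroscopic hypervelocity impacts.
data = [(E, 2.0 * E ** 1.5) for E in (1.0, 2.0, 4.0, 8.0, 16.0)]

xs = [math.log(E) for E, _ in data]
ys = [math.log(Y) for _, Y in data]
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n

# Ordinary least-squares slope in log-log space recovers the exponent.
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))

assert abs(slope - 1.5) < 1e-9
```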