17 results for "sampling of literature papers"

in Helda - Digital Repository of the University of Helsinki


Abstract:

The Earth's climate is a highly dynamic and complex system in which atmospheric aerosols have been increasingly recognized to play a key role. Aerosol particles affect the climate through a multitude of processes, directly by absorbing and reflecting radiation and indirectly by changing the properties of clouds. Because of this complexity, quantification of the effects of aerosols remains highly uncertain, and a better understanding of those effects requires more information on aerosol chemistry. Before the chemical composition of an aerosol can be determined by the various available analytical techniques, the particles must be reliably sampled and prepared. Indeed, sampling is one of the most challenging steps in aerosol studies, since all available sampling techniques harbor drawbacks. In this study, novel methodologies were developed for sampling atmospheric aerosols and determining their chemical composition. In the particle-into-liquid sampler (PILS), aerosol particles grow in saturated water vapor and are then impacted into and dissolved in liquid water. Once in water, the aerosol sample can be transported and analyzed by various off-line or on-line techniques. In this study, PILS was modified and the sampling procedure was optimized to obtain less altered aerosol samples with good time resolution. A combination of denuders with different coatings was tested to adsorb gas-phase compounds before PILS. Mixtures of water with alcohols were introduced to increase the solubility of aerosols. The minimum sampling time required was determined by collecting samples off-line every hour and proceeding with liquid-liquid extraction (LLE) and analysis by gas chromatography-mass spectrometry (GC-MS). The laboriousness of LLE followed by GC-MS analysis prompted an evaluation of solid-phase extraction (SPE) for the extraction of aldehydes and acids in aerosol samples; these two compound groups are thought to be key for aerosol growth. Octadecylsilica, hydrophilic-lipophilic balance (HLB), and mixed-phase anion exchange (MAX) materials were tested for extraction. MAX proved to be efficient for acids, but no tested material offered sufficient adsorption of aldehydes. Thus, PILS samples were extracted only with MAX to guarantee good results for organic acids determined by high-performance liquid chromatography-mass spectrometry (HPLC-MS). On-line coupling of SPE with HPLC-MS is relatively easy, and here the on-line coupling of PILS with HPLC-MS through an SPE trap produced some interesting data on relevant acids in atmospheric aerosol samples. A completely different approach to aerosol sampling, namely differential mobility analyzer (DMA)-assisted filter sampling, was employed in this study to provide information about the size-dependent chemical composition of aerosols and to further understanding of the processes driving aerosol growth from nano-sized clusters to climatically relevant particles (>40 nm). The DMA was set to sample particles with diameters of 50, 40, and 30 nm, and the aerosols were collected on Teflon or quartz fiber filters. To clarify the gas-phase contribution, zero gas-phase samples were collected by switching off the DMA for alternating 15-minute periods. Gas-phase compounds were adsorbed equally well on both types of filter and were found to contribute significantly to the total compound mass. Gas-phase adsorption is especially significant during the collection of nanometer-sized aerosols and always needs to be taken into account.
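The blank-correction logic behind the zero gas-phase samples can be made concrete with a small sketch. The Python snippet below, with invented numbers and hypothetical variable names, subtracts the mass adsorbed on a DMA-off filter from that of a DMA-on filter to estimate the true particle-phase mass; it illustrates the idea only and is not the study's actual data processing.

```python
# Hypothetical sketch of the gas-phase blank correction described above:
# 'zero gas-phase' filters (DMA switched off) collect only adsorbed gases,
# so subtracting their mass from the DMA-on filters estimates the true
# particle-phase mass. All values are invented for illustration.

def particle_phase_mass(total_ug, gas_blank_ug):
    """Particle-phase mass = DMA-on filter mass minus gas-phase blank mass."""
    corrected = total_ug - gas_blank_ug
    return max(corrected, 0.0)  # mass cannot be negative

# Example: a 30 nm sample where adsorbed gases dominate the filter mass
total = 1.8       # ug, filter collected with the DMA sampling 30 nm particles
gas_blank = 1.2   # ug, parallel filter collected with the DMA switched off
print(f"particle fraction: {particle_phase_mass(total, gas_blank) / total:.0%}")
```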
Other aims of this study were to determine the oxidation products of β-caryophyllene (the major sesquiterpene in boreal forests) in aerosol particles. Since reference compounds are needed to verify the accuracy of analytical measurements, three oxidation products of β-caryophyllene were synthesized: β-caryophyllene aldehyde, β-nocaryophyllene aldehyde, and β-caryophyllinic acid. All three were identified for the first time in ambient aerosol samples, at relatively high concentrations, and their contribution to the aerosol mass (and probably growth) was concluded to be significant. The methodological and instrumental developments presented in this work enable a fuller understanding of the processes behind biogenic aerosol formation and provide new tools for more precise determination of biosphere-atmosphere interactions.

Abstract:

This study analyses the influence of chaos theory on fiction and literary scholarship, and argues that the role of chaos theory in the literary field is best understood through the concepts it has opened up. Rather than being applied directly, chaos theory has enabled new kinds of discussions of old topics, and concepts drawn from the natural sciences have allowed previously deadlocked arguments to be reopened from a new perspective. The dissertation focuses on three areas: theorizing the structure of the literary work, conceptualizing and describing human identity (especially that of the author), and reflecting on the relationship between fiction and reality. The aim of the study is to show how these topics have been approached through chaos theory both in literary scholarship and in works of fiction. At the centre of the dissertation are analyses of works by the novelist John Barth, the dramatist Tom Stoppard, and the poet Jorie Graham. These writers draw on chaos theory for ways of conceptualizing structures that are simultaneously dynamic processes and perceivable forms. Recurring literary themes include the paradoxically recognizable yet ever-changing identity of the human being, and a reality that escapes final appropriation but remains fascinating and worth pursuing. Through the analysis of these writers' works and through theoretical discussion, the dissertation brings out a humanistic perspective on the significance of chaos theory in literature, one that emphasizes coherence, intelligibility and realism and has been overshadowed in previous research.

Abstract:

This study concentrates on the contested concept of pastiche in literary studies. It offers the first detailed examination of the history of the concept from its origins in the seventeenth century to the present, showing how pastiche emerged as a critical concept in interaction with the emerging conception of authorial originality and the copyright laws protecting it. One of the key results of this investigation is the contextualisation of the postmodern debate on pastiche. Even though postmodern critics often emphasise the radical novelty of pastiche, they in fact resuscitate older positions and arguments without necessarily reflecting on their historical conditions. This historical background is then used to analyse the distinction between the primarily French conception of pastiche as the imitation of style and the postmodern notion of it as the compilation of different elements. The vagueness and inclusiveness of the latter detract from its value as a critical concept. The study thus concentrates on the notion of stylistic pastiche, challenging the widespread prejudice that it is merely an indication of lack of talent. Because it rests on repetition in multiple ways, pastiche is in fact a highly ambiguous or double-edged practice that calls into question the distinction between repetition and original, thereby undermining the received notion of individual unique authorship as a fundamental aesthetic value. Pastiche does not, however, constitute a radical upheaval of the basic assumptions on which the present institution of literature relies, since, in order to mark its difference, pastiche always refers to a source outside itself against which its difference is measured. Finally, the theoretical analysis of pastiche is applied to literary works. The pastiches written by Marcel Proust demonstrate how pastiche can become an integral part of a writer's poetics: imitation of style is shown to provide Proust with a way of exploring the role of style as a connecting point between inner vision and reality. The pastiches of the Sherlock Holmes stories by Michael Dibdin, Nicholas Meyer, and the duo Adrian Conan Doyle and John Dickson Carr illustrate the functions of pastiche within a genre (detective fiction) that is itself fundamentally repetitive. A.S. Byatt's Possession and D.M. Thomas's Charlotte use Victorian pastiches to investigate the conditions of literary creation in the age of postmodern suspicion of creativity and individuality. The study thus argues that the concept of pastiche has valuable insights to offer to literary criticism and theory, and that literary pastiches, though often dismissed in reviews and criticism, are a particularly interesting object of study precisely because of their characteristic ambiguity.

Abstract:

The World Wide Web provides the opportunity for a radically changed and much more efficient communication process for scientific results. A survey in the closely related domains of construction information technology and construction management was conducted in February 2000, aimed at measuring to what extent these opportunities are already changing scientific information exchange and how researchers feel about the changes. The paper presents the results, based on 236 replies to an extensive Web-based questionnaire. 65% of the respondents stated their primary research interest as IT in A/E/C and 20% as construction management and economics. The questions dealt with how researchers find, access, and read different sources; how much and what publications they read; how often and to which conferences they travel; how much they publish; and what the criteria are for where they eventually decide to publish. Some of the questions compared traditional and electronic publishing, with one final section dedicated to opinions about electronic publishing. According to the survey, researchers already download from the Web half of the material that they read. The most popular method for retrieving an interesting publication is downloading it for free from the author's or publisher's website. Researchers are not particularly willing to pay for electronic scientific publications. There is much support for a scenario of electronic journals available entirely free on the Web, where the costs could be covered by, for instance, professional societies or the publishing university. The shift that the Web is causing seems to be towards the "just in time" reading of literature. Also, frequent users of the Web rely less on scientific publications and tend to read fewer articles. If available with little effort, papers published in traditional journals are preferred; if not, the papers should be on the Web. In these circumstances, the role of paper-based journals published by established publishers is shifting from the core "information exchange" to the building of authors' prestige. The respondents feel they should build up their reputations by publishing in journals and relevant conferences, but then make their work freely available on the Web.
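Since the headline shares above rest on n = 236 responses, their sampling uncertainty is worth keeping in mind. The minimal sketch below attaches a normal-approximation 95% confidence interval to those proportions; the interval method is our illustrative choice, not something reported in the paper.

```python
# Normal-approximation (Wald) 95% confidence interval for a survey share.
# The sample size and shares come from the abstract; the CI method is an
# illustrative assumption, not necessarily what the study used.
import math

def proportion_ci(p, n, z=1.96):
    """Wald 95% CI for a proportion p estimated from n responses."""
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

n = 236
for label, share in [("IT in A/E/C", 0.65), ("construction management", 0.20)]:
    lo, hi = proportion_ci(share, n)
    print(f"{label}: {share:.0%} (95% CI {lo:.1%} to {hi:.1%})")
```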

Abstract:

Both management scholars and economic geographers have studied knowledge and argued that the ability to transfer knowledge is critical to competitive success. Networks and other forms of cooperation are often the context when analyzing knowledge transfer within management research, while economic geographers focus on the role of the cluster in knowledge transfer and creation. Despite this common interest in knowledge transfer, few attempts at interdisciplinary research have been made. The aim of this paper is to outline the knowledge transfer concepts in the two strands of literature, management and economic geography (EG). The paper takes an analytical approach, reviewing the existing contributions and seeking to identify the benefits of further interaction between the disciplines. Furthermore, it offers an interpretation of the concepts of cluster and network and suggests a clearer distinction between their respective definitions. The paper posits that studies of internal networks transcending national borders and studies of clusters are not necessarily mutually exclusive when it comes to the transfer of knowledge and the learning process of the firm. Our conclusion is that researchers in general seem to increasingly acknowledge the importance of studying both the effect of and the need for geographical proximity and external networks in the knowledge transfer process, but that equivocalness remains in the definitions of clusters and networks.

Abstract:

A large volume of literature suggests that information asymmetry resulting from the spatial separation between investors and investments has a significant impact on the composition of investors' domestic and international portfolios. I show that institutional factors affecting trading in tangible goods help explain a substantial portion of investors' spatial bias. More importantly, I demonstrate that an information flow medium whose breadth and richness are directly linked to the bilateral commitment of resources between countries, which I measure by their trading intensity in tangible goods, is consistent with the prevailing country allocation in investors' international portfolios.
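The claimed link between trade intensity and portfolio allocation can be pictured as a simple cross-country regression. The sketch below fabricates country-pair data and regresses portfolio weights on log trade intensity; the specification, variable names, and numbers are all our illustrative assumptions, not the paper's actual model.

```python
# Illustrative-only regression in the spirit of the relationship described
# above: do bilateral portfolio weights rise with bilateral trade intensity?
# All data are fabricated; the paper's actual specification may differ.
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 50
trade_intensity = rng.lognormal(mean=0.0, sigma=1.0, size=n_pairs)
# Assume (for illustration) weights increase in log trade intensity plus noise
portfolio_weight = (0.02 + 0.01 * np.log(trade_intensity)
                    + rng.normal(0.0, 0.005, n_pairs))

# Ordinary least squares with an intercept, via numpy
X = np.column_stack([np.ones(n_pairs), np.log(trade_intensity)])
beta, *_ = np.linalg.lstsq(X, portfolio_weight, rcond=None)
print(f"intercept = {beta[0]:.4f}, slope on log(trade intensity) = {beta[1]:.4f}")
```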

Abstract:

This work investigates the role of narrative literature in late-20th-century and contemporary Anglo-American moral philosophy. It aims to show the trend of reading narrative literature for purposes of moral philosophy from the 1970s and early 1980s to the present day as part of a larger movement in Anglo-American moral philosophy, and to present a view of its significance for moral philosophy overall. Chapter 1 provides some preliminaries concerning the view of narrative literature on which my discussion builds. In chapter 2 I give an outline of how narrative literature is considered in contemporary Anglo-American moral philosophy, and connect this use to the broad trend of neo-Aristotelian ethics in this context. In chapter 3 I connect the use of literature to the idea of the non-generalizability of moral perception and judgment, which is central to the neo-Aristotelian trend, as well as to a range of moral particularisms and anti-theoretical positions of late-20th-century and contemporary ethics. The joint task of chapters 2 and 3 is to situate the trend of reading narrative literature for the purposes of moral philosophy in the present context of moral philosophy. In the following two chapters, 4 and 5, I move on from the particularizing power of narrative literature, which is emphasized by neo-Aristotelians and particularists alike, to a broader understanding of the intellectual potential of narrative literature. In chapter 4 I argue that narrative literature has its own forms of generalization, which enrich our understanding of the workings of ethical generalizations in philosophy. In chapter 5 I discuss Iris Murdoch's and Martha Nussbaum's respective ways of combining ethical generality and particularity in a philosophical framework where both systematic moral theory and narrative literature are taken seriously. In chapter 6 I analyse the controversy between contemporary anti-theoretical conceptions of ethics and Nussbaum's refutation of these. I present my suggestion for how the significance of the ethics/literature discussion for moral philosophy can be understood if one wants to overcome the limitations of both Nussbaum's theory-centred, equilibrium-seeking perspective and the anti-theorists' repudiation of theory. I call my position the "inclusive approach".

Abstract:

This is an ethnographic case study of the creation and emergence of a playworld, a pedagogical approach aimed at promoting children's development and learning in early education settings through the use of play and drama. The data were collected in a Finnish experimental mixed-age elementary school classroom in the school year 2003-2004. In the playworld, students and teachers explore different social and cultural phenomena by taking on the roles of characters from a story or a piece of literature and acting within the frames of an improvised plot. The thesis scrutinizes the notion of agency in education. It produces theoretically grounded empirical knowledge of the ways in which children struggle to become recognized and agentive actors in early education settings and of how their agency develops in their interaction with adults. The study builds on the activity-theoretical and sociocultural tradition and develops a methodological framework called video-based narrative interaction analysis for studying student agency as developing over time but manifesting through situational material and discursive local interactions. The research questions are: 1. What are the children's ways of enacting their agency in the playworld? 2. How do the children's agentive actions change and develop over the spring? 3. What are the potentials and challenges of the playworld for promoting student agency? 4. How do the teachers and the children deal with the contradiction between control and agency in the playworld? The study consists of a summary part and four empirical articles, each of which has a particular viewpoint. Articles I and II deal with individual students' paths to agency. In Article I the focus is on the role of resistance and questioning in enabling important spaces for agency. Article II takes a critical gender perspective and analyzes how two girls struggled towards recognition in the playworld. It also illuminates the role of imagination in developing a sense of agency. Article III examines how the open-ended and improvisational nature of the playworld interaction provided experiences and a sense of "shared agency" for the students and teachers in the class. Article IV turns the focus to the teachers and analyzes how their role actions in the playworld helped the children to enact agency. It also discusses the challenges that the teachers faced in this work and asks what makes the playworld activity sustainable in the class. The summary part provides a critical literature review on the concept of agency and argues that the inherently contradictory nature of the phenomenon of agency has not been sufficiently theorized. The summary part also locates the playworld intervention in a historical frame by discussing the changing conceptions of adulthood and childhood in the West. By focusing on the changing role of play and art in both adults' and children's contemporary lives, the thesis opens up an important but often neglected perspective on the problem of promoting student agency in education. The results illustrate how engaging in a collectively imagined and dramatized pretend-play space together with the children enabled the teachers to momentarily put aside their "knower" positions in the classroom. The fictive roles and the narrative plot helped them to create a necessary incompleteness and open-endedness in the activity that stimulated the children's initiatives.
This meant that the children too could momentarily step out of their traditional classroom positions as pupils and initiate action to further the collective play. Engaging in this kind of unconventional activity and taking up and enacting agency was, however, very challenging for the participating children and teachers. It often contradicted the need to sustain control and order in the classroom. The study concludes that play- and drama-based pedagogies offer a unique but undeveloped potential for developing educational spaces that help teachers and children deal with the often contradictory requirements of schooling.

Abstract:

Printing papers have been the main product of the Finnish paper industry. To improve the properties and economy of printing papers, this study examines the control of tracheid cross-sectional dimensions and of wood viscoelasticity. Control is understood here as any procedure that yields raw material classes with distinct properties and small internal variation. Tracheid cross-sectional dimensions, i.e., cell wall thickness and radial and tangential diameters, can be controlled with methods such as sorting wood into pulpwood and sawmill chips, sorting logs according to tree social status, and fractionating fibres. These control methods were analysed in this study with simulations based on measured tracheid cross-sectional dimensions. A SilviScan device was used to measure the data set from five Norway spruce (Picea abies) and five Scots pine (Pinus sylvestris) trunks. The simulation results indicate that the sawmill chip and top pulpwood assortments have quite similar cross-sectional dimensions. Norway spruce and Scots pine are on average also relatively similar in their cross-sectional dimensions. The distributions of these species are somewhat different, but from a practical point of view, the differences are probably of minor importance. Tracheid cross-sectional dimensions can be controlled most efficiently with methods that separate fibres into earlywood and latewood. Sorting of logs or partitioning of logs into juvenile and mature wood were markedly less efficient control methods than fractionation of fibres. Wood viscoelasticity affects energy consumption in mechanical pulping and is thus an interesting control target when improving the energy efficiency of the process. A literature study was made to evaluate the possibility of using viscoelasticity in controlling. The study indicates that there is considerable variation in viscoelastic properties within tree species, but unfortunately, the viscoelastic properties of important raw material lots such as top pulpwood or sawmill chips are not known. The viscoelastic properties of wood depend mainly on lignin, but also on the microfibrillar angle, the width of cellulose crystals, and tracheid cross-sectional dimensions.
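The efficiency comparison between control methods can be pictured with a toy version of such a simulation: how much does the within-class variation in cell wall thickness shrink under a weakly informative log-level sorting versus an earlywood/latewood fractionation? All distributions and parameters below are invented stand-ins for the SilviScan data.

```python
# Toy simulation of the control-method comparison described above. The aim
# of 'control' is small within-class variation, so we compare the pooled
# within-class SD of cell wall thickness under two classifications.
# Every distribution here is hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
# Earlywood/latewood mixture (hypothetical means/SDs, micrometres)
is_latewood = rng.random(n) < 0.3
wall = np.where(is_latewood,
                rng.normal(5.0, 0.6, n),   # latewood: thick walls
                rng.normal(2.0, 0.4, n))   # earlywood: thin walls
# A log-level label that is (by construction) uninformative about wall thickness
log_class = rng.random(n) < 0.5

def pooled_within_sd(values, labels):
    """Pooled within-class standard deviation for a binary classification."""
    var = (labels.sum() * values[labels].var()
           + (~labels).sum() * values[~labels].var()) / len(values)
    return np.sqrt(var)

print(f"no sorting:           SD = {wall.std():.2f}")
print(f"sorting by log class: SD = {pooled_within_sd(wall, log_class):.2f}")
print(f"earlywood/latewood:   SD = {pooled_within_sd(wall, is_latewood):.2f}")
```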

Abstract:

The United States is the world's single biggest market area, where the demand for graphic papers increased by 80% during the last three decades. However, during the last two decades there have been very big, unpredictable changes in the graphic paper markets. For example, the consumption of newsprint started to decline in the late 1980s, which was surprising compared to the historical consumption and projections, and consumption has declined ever since. The aim of this study was to see how magazine paper consumption will develop in the United States up to 2030. The long-term consumption projection was made using mainly two methods. The first method was trend analysis, to see whether and how consumption has changed since 1980. The second method was qualitative estimation. These estimates are then compared to the so-called classical model projections usually mentioned and used in the forestry literature. The purpose of the qualitative analysis is to study magazine paper end-use purposes and to analyze how, and with what intensity, changes in society will affect magazine paper consumption in the long term. The framework of this study covers theories such as technology adoption, electronic substitution, electronic publishing, and Porter's threat of substitution. Because this study deals with markets which have shown signs of structural change, a very substantial part of it covers recent developments and the newest available studies and statistics. The following were among the key findings of this study. Different end-uses have very different futures: electronic substitution is very likely in some end-use purposes, but not in all. Young people, i.e., future consumers, have very different habits and technological opportunities than their parents did, and these will have substantial effects on magazine paper consumption in the long term. This study concludes that the change in magazine paper consumption is more likely to be gradual (evolutionary) than a sudden collapse (revolutionary). It is also probable that the years of fast-growing consumption of magazine papers are over. Beyond this decelerated growth, the consumption of magazine papers will decline slowly in the long term, and the further into the future the projection extends, the faster the decline.
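The trend-analysis step can be sketched in a few lines: fit a linear trend to an annual consumption series and extrapolate it to 2030. The series below is fabricated for illustration; the study used actual U.S. magazine paper statistics, and a straight-line extrapolation deliberately ignores the structural change discussed above, which is exactly why the study complements it with qualitative analysis.

```python
# Minimal sketch of trend extrapolation to 2030. The consumption series is
# invented: growth that gradually flattens, mimicking a maturing market.
import numpy as np

years = np.arange(1980, 2006)
t = years - 1980
consumption = 4.0 + 0.08 * t - 0.0015 * t ** 2   # hypothetical, million tonnes

# Fit a first-degree (linear) trend and extrapolate
slope, intercept = np.polyfit(years, consumption, 1)
projection_2030 = slope * 2030 + intercept
print(f"linear trend: {slope:+.3f} Mt/year; "
      f"extrapolated 2030 level: {projection_2030:.1f} Mt")
```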

Abstract:

In this study, a quality assessment method based on sampling of primary laser inventory units (microsegments) was analysed. The accuracy of a laser inventory carried out in Kuhmo was analysed as a case study, with field sample plots measured on the sampled microsegments in the Kuhmo inventory area. Two main questions were considered: did the ALS-based inventory meet the accuracy requirements set for the provider, and how should a reliable, cost-efficient and independent quality assessment be undertaken? The agreement between the control measurements and the ALS-based inventory was analysed in four ways: 1) The root mean squared errors (RMSEs) and bias were calculated. 2) Scatter plots with 95% confidence intervals were plotted and the placing of identity lines was checked. 3) Bland-Altman plots were drawn, in which the mean difference between the control method and the ALS method was calculated for each attribute and plotted against the average value of the attribute. 4) Tolerance limits were defined and combined with the Bland-Altman plots. The RMSE values were compared to a reference study from which the accuracy requirements had been set for the service provider. The accuracy requirements in Kuhmo were met; however, the comparison of RMSE values proved difficult. Field control measurements are costly and time-consuming, but they are considered robust. Nevertheless, control measurements may include errors which are difficult to take into account. In Bland-Altman analysis, neither of the compared methods is assumed to be completely exact, which makes it a fair way to interpret the assessment results. Tolerance limits set at the time of ordering, combined with Bland-Altman plots, were suggested for adoption in practice. In addition, bias should be calculated for the total area. Some other approaches to quality control were briefly examined. No method was found to fulfil all the required demands of statistical reliability, cost-efficiency, time efficiency, simplicity and speed of implementation. Some benefits and shortcomings of the studied methods were discussed.
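The four agreement checks listed above reduce to a handful of numeric quantities, sketched below for paired control versus ALS-based estimates. The data, the attribute, and the tolerance limit are all fabricated; only the statistics (RMSE, bias, Bland-Altman limits of agreement) follow the standard definitions.

```python
# Sketch of the agreement statistics behind the quality assessment above:
# RMSE, bias, and Bland-Altman limits of agreement (mean diff +/- 1.96 SD),
# plus the share of plots within a hypothetical tolerance limit.
import numpy as np

rng = np.random.default_rng(2)
control = rng.normal(150, 40, 100)        # e.g. stand volume, m3/ha (invented)
als = control + rng.normal(3, 15, 100)    # ALS estimate with bias and noise

diff = als - control
rmse = np.sqrt(np.mean(diff ** 2))
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1),    # Bland-Altman limits of agreement
       bias + 1.96 * diff.std(ddof=1))
tolerance = 30.0                          # hypothetical ordered tolerance limit
within = np.mean(np.abs(diff) <= tolerance)

print(f"RMSE = {rmse:.1f}, bias = {bias:+.1f}")
print(f"Bland-Altman limits of agreement: {loa[0]:+.1f} to {loa[1]:+.1f}")
print(f"share of plots within +/-{tolerance:.0f}: {within:.0%}")
```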

Abstract:

An overwhelming majority of all the research on soil phosphorus (P) has been carried out with soil samples taken from the surface soil only, and our understanding of the forms and reactions of P at the soil profile scale is based on few observations. In Finland, the interest in studying P in complete soil profiles has been particularly small because of the lack of a tradition of studying soil genesis, morphology, or classification. In this thesis, the P reserves and the retention of orthophosphate phosphorus (PO4-P) were examined in four cultivated mineral soil profiles in Finland (three Inceptisols and one Spodosol). The soils were classified according to the U.S. Soil Taxonomy and soil samples were taken from the genetic horizons in the profiles. The samples were analyzed for total P concentration, Chang and Jackson P fractions, P sorption properties, concentrations of water-extractable P, and concentrations of oxalate-extractable Al and Fe. Theoretical P sorption capacities and degrees of P saturation were calculated from the data of the oxalate extractions and the P fractionations. The studied profiles can be divided by their master horizons Ap, B and C into sections with clearly differing P characteristics. The C (or transitional BC) horizons below an approximate depth of 70 cm were dominated by presumably apatitic, H2SO4-soluble P. The concentration of total P in the C horizons ranged from 729 to 810 mg kg-1. In the B horizons, between the depths of 30 and 70 cm, a significant part of the primary acid-soluble P has been weathered and transformed into secondary P forms. The mean weathering rate of the primary P in the soils was estimated to vary between 230 and 290 g ha-1 year-1. The degrees of P saturation in the B and C horizons were smaller than 7%, and the solubility of PO4-P was negligible. The P conditions in the Ap horizons differed drastically from those in the subsurface horizons. The high concentrations of total P (689-1870 mg kg-1) in the Ap horizons are most likely attributable to long-term cultivation with positive P balances. A significant proportion of the P in the Ap horizons occurred in the NH4F- and NaOH-extractable forms and as organic P. These three P pools, together with the concentrations of oxalate-extractable Al and Fe, seem to control the dynamics of PO4-P in the soils. The degrees of P saturation in the Ap horizons were greater (8-36%) than in the subsurface horizons. This was also reflected in the sorption experiments: only the Ap horizons were able to maintain elevated PO4-P concentrations in the solution phase; all the subsoil horizons acted as sinks for PO4-P. Most of the available sorption capacity in the soils is located in the B horizons. The results suggest that this capacity could be utilized in reducing the losses of soluble P from excessively fertilized soils by mixing highly sorptive material from the B horizons with the P-enriched surface soil. The drastic differences in the P characteristics observed between adjoining horizons have to be taken into consideration when conducting soil sampling. Subsoil sampling has to be done according to the genetic horizons or at small depth increments; otherwise, contrasting materials are likely to be mixed in the same sample, and the results of such samples are not representative of any material present in the studied profile. Air-drying of soil samples was found to alter the results of the sorption experiments and the water extractions.
This indicates that studies on the most labile P forms in soil should be carried out on moist samples.
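The degree of P saturation quoted above is, in one common formulation, the molar ratio of oxalate-extractable P to oxalate-extractable Al plus Fe. The sketch below uses that formulation with hypothetical horizon data; the scaling factor and the thesis's exact convention may differ.

```python
# One common formulation of the degree of P saturation (DPS) from
# oxalate-extractable P, Al and Fe (all in mmol/kg). The scaling factor
# alpha varies between studies; this is an illustrative assumption.

def degree_of_p_saturation(p_ox, al_ox, fe_ox, alpha=1.0):
    """DPS (%) = 100 * P_ox / (alpha * (Al_ox + Fe_ox)), molar basis."""
    return 100.0 * p_ox / (alpha * (al_ox + fe_ox))

# Hypothetical horizon data illustrating the Ap vs. subsoil contrast above
for horizon, p, al, fe in [("Ap", 18.0, 60.0, 40.0), ("B", 4.0, 90.0, 50.0)]:
    print(f"{horizon} horizon: DPS = {degree_of_p_saturation(p, al, fe):.0f}%")
```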

Abstract:

Recent epidemiological studies have shown a consistent association of the mass concentrations of urban air thoracic (PM10) and fine (PM2.5) particles with mortality and morbidity among cardiorespiratory patients. However, the chemical characteristics of the different particulate size ranges and the biological mechanisms responsible for these adverse health effects are not well known. The principal aims of this thesis were to validate a high volume cascade impactor (HVCI) for the collection of particulate matter for physicochemical and toxicological studies, and to make an in-depth chemical and source characterisation of samples collected during different pollution situations. The particulate samples were collected with the HVCI, virtual impactors and a Berner low-pressure impactor in six European cities: Helsinki, Duisburg, Prague, Amsterdam, Barcelona and Athens. The samples were analysed for particle mass, common ions, total and water-soluble elements, as well as elemental and organic carbon. Laboratory calibration and field comparisons indicated that the HVCI can provide unique large-capacity, high-efficiency sampling of size-segregated aerosol particles. The cutoff sizes of the recommended HVCI configuration were 2.4, 0.9 and 0.2 μm. The HVCI mass concentrations were in good agreement with the reference methods, but the chemical composition, especially of the fine particulate samples, showed some differences. This implies that the chemical characterization of the exposure variable in toxicological studies needs to be done from the same HVCI samples as are used in the cell and animal studies. The data from parallel, low volume reference samplers provide valuable additional information for chemical mass closure and source assessment. The major components of PM2.5 in the virtual impactor samples were carbonaceous compounds, secondary inorganic ions and sea salt, whereas those of coarse particles (PM2.5-10) were soil-derived compounds, carbonaceous compounds, sea salt and nitrate. The major and minor components together accounted for 77-106% and 77-96% of the gravimetrically measured masses of fine and coarse particles, respectively. Relatively large differences between sampling campaigns were observed in the organic carbon content of the PM2.5 samples as well as in the mineral composition of the PM2.5-10 samples. A source assessment based on chemical tracers suggested clear differences in the dominant sources (e.g. traffic, residential heating with solid fuels, metal industry plants, regional or long-range transport) between the sampling campaigns. In summary, the field campaigns exhibited different profiles with regard to particulate sources, size distribution and chemical composition, thus providing a highly useful setup for toxicological studies on the size-segregated HVCI samples.
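The chemical mass closure check mentioned above is simply the sum of analysed components compared to the gravimetric mass, as in the 77-106% closures reported. A minimal sketch, with fabricated component concentrations:

```python
# Chemical mass closure: reconstructed mass (sum of analysed components)
# divided by gravimetrically measured mass. All values are invented (ug/m3);
# the component list loosely mirrors the PM2.5 composition described above.

components = {
    "organic matter": 4.1,
    "elemental carbon": 1.0,
    "secondary inorganic ions": 3.6,
    "sea salt": 0.8,
    "mineral dust": 0.5,
}
gravimetric_mass = 10.4

reconstructed = sum(components.values())
closure = 100.0 * reconstructed / gravimetric_mass
print(f"reconstructed {reconstructed:.1f} of {gravimetric_mass:.1f} ug/m3 "
      f"({closure:.0f}% mass closure)")
```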

Abstract:

Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or to base the inference only on the likelihood function, may be a fundamental question in theory, but in practice it may well be of less importance if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where parametric assumptions in standard statistical models would be too rigid, more flexible model formulation, combined with fully probabilistic inference, can be achieved using hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two of them hierarchical Bayesian modeling. Because maximum likelihood may be presented as a special case of Bayesian inference, but not the other way round, in the introductory part of this work we present a framework for probability-based inference using only Bayesian concepts. We also re-derive some results presented in the original articles using the toolbox provided herein, to show that they are also justifiable under this more general framework. Here the assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that this same reasoning can also be applied under sampling from a finite population. The main emphasis here is on probability-based inference under incomplete observation due to study design, illustrated using a generic two-phase cohort sampling design as an example. The alternative approaches presented for the analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set, conditioning on the rule that generated that set. Conditional likelihood inference is also applied to a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is feasible also in this case.
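The claim that maximum likelihood can be read as a special case of Bayesian inference has a textbook illustration in the binomial model: with a flat Beta(1,1) prior on a proportion, the posterior mode coincides with the ML estimate. The tiny sketch below, with invented data, demonstrates only this general point, not any analysis from the thesis.

```python
# With a flat Beta(1,1) prior on a binomial proportion, the posterior is
# Beta(k+1, n-k+1) and its mode equals the ML estimate k/n. Invented data.

k, n = 7, 20                          # successes, trials (hypothetical)
mle = k / n                           # maximum likelihood estimate
a, b = k + 1, n - k + 1               # flat prior -> Beta(k+1, n-k+1) posterior
posterior_mode = (a - 1) / (a + b - 2)
posterior_mean = a / (a + b)

print(f"MLE = {mle:.3f}, posterior mode = {posterior_mode:.3f} (equal)")
print(f"posterior mean = {posterior_mean:.3f} (the prior still pulls the mean)")
```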

Abstract:

The increase in global temperature has been attributed to increased atmospheric concentrations of greenhouse gases (GHG), mainly CO2. The threat of severe and complex socio-economic and ecological implications of climate change has initiated an international process that aims to reduce emissions, to increase C sinks, and to protect existing C reservoirs; the Kyoto Protocol is an offspring of this process. The Kyoto Protocol and its accords state that signatory countries need to monitor their forest C pools and to follow the guidelines set by the IPCC in the preparation, reporting and quality assessment of the C pool change estimates. The aims of this thesis were i) to estimate the changes in the carbon stocks of vegetation and soil in Finnish forests from 1922 to 2004, ii) to evaluate the applied methodology using empirical data, iii) to assess the reliability of the estimates by means of uncertainty analysis, iv) to assess the effect of forest C sinks on the reliability of the entire national GHG inventory, and finally, v) to present an application of model-based stratification to a large-scale sampling design for soil C stock changes. The applied methodology builds on measured forest inventory data (or modelled stand data) and uses statistical modelling to predict biomasses and litter production, as well as a dynamic soil C model to predict the decomposition of litter. The mean vegetation C sink of Finnish forests from 1922 to 2004 was 3.3 Tg C a-1, and the mean soil C sink was 0.7 Tg C a-1. Soil is slowly accumulating C as a consequence of the increased growing stock and of soil C stocks that are unsaturated in relation to the current detritus input, which is higher than at the beginning of the period. Annual estimates of vegetation and soil C stock changes fluctuated considerably during the period and were frequently opposite in sign (e.g. vegetation was a sink while soil was a source). The inclusion of vegetation sinks in the national GHG inventory of 2003 increased its uncertainty from between -4% and 9% to ±19% (95% CI), and the further inclusion of upland mineral soils increased it to ±24%. The uncertainties of annual sinks can be reduced most efficiently by concentrating on the quality of the model input data. Despite the decreased precision of the national GHG inventory, the inclusion of uncertain sinks improves its accuracy due to the larger sectoral coverage of the inventory. If the national soil sink estimates were prepared by repeated soil sampling of model-stratified sample plots, the uncertainties would be accounted for in the stratum formation and sample allocation; otherwise, the gains in sampling efficiency from stratification remain smaller. The highly variable and frequently opposite annual changes in ecosystem C pools imply the importance of full ecosystem C accounting. If forest C sink estimates are to be used in practice, average sink estimates seem a more reasonable basis than annual estimates, because annual forest sinks vary considerably, annual estimates are uncertain, and this has severe consequences for the reliability of the total national GHG balance. The estimation of average sinks should still be based on annual or even more frequent data because of the non-linear decomposition process, which is influenced by the annual climate. The methodology used in this study to predict forest C sinks can be transferred to other countries with some modifications.
The ultimate verification of sink estimates should be based on comparison to empirical data, in which case the model-based stratification presented in this study can serve to improve the efficiency of the sampling design.
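The way an uncertain sink term widens the uncertainty of a national total, as in the -4%..9% versus ±19% figures above, can be illustrated with a Monte Carlo propagation sketch. All magnitudes and uncertainty assumptions below are hypothetical; only the mechanism (adding a noisy term widens the total's interval while broadening coverage) reflects the text.

```python
# Monte Carlo sketch of uncertainty propagation when uncertain sink terms
# are added to a national GHG total. All numbers are invented.
import numpy as np

rng = np.random.default_rng(3)
n_draws = 100_000
emissions = rng.normal(80.0, 2.0, n_draws)   # Tg CO2-eq, fairly certain
veg_sink = rng.normal(-12.0, 6.0, n_draws)   # Tg CO2-eq, very uncertain
soil_sink = rng.normal(-2.5, 3.0, n_draws)   # Tg CO2-eq, very uncertain

for label, total in [("emissions only", emissions),
                     ("+ vegetation sink", emissions + veg_sink),
                     ("+ soil sink", emissions + veg_sink + soil_sink)]:
    lo, hi = np.percentile(total, [2.5, 97.5])
    mean = total.mean()
    rel = 100 * (hi - lo) / (2 * abs(mean))
    print(f"{label}: {mean:.1f} (95% CI {lo:.1f} to {hi:.1f}, +/-{rel:.0f}%)")
```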