697 results for työn kehittäminen


Relevância:

10.00%

Publicador:

Resumo:

The commodity plastics used in our everyday lives are based on polyolefin resins, and they find a wide variety of applications in several areas. Most of the production is carried out in catalysed low-pressure processes. As a consequence, polymerization of ethene and α-olefins has been one of the focus areas of catalyst research in both industry and academia. An enormous amount of effort has been dedicated to fine-tuning the processes, obtaining better control of the polymerization, and producing tailored polymer structures. The literature review of the thesis concentrates on the use of Group IV metal complexes as catalysts for the polymerization of ethene and branched α-olefins. More precisely, the review focuses on the use of complexes bearing [O,O]- and [O,N]-type ligands, which have gained considerable interest. Effects of the ligand framework as well as the mechanical and fluxional behaviour of the complexes are discussed. The experimental part consists mainly of the development of new Group IV metal complexes bearing [O,O] and [O,N] ligands and their use as catalyst precursors in ethene polymerization. Part of the experimental work deals with the use of high-throughput techniques in tailoring the properties of new polymer materials synthesized using Group IV complexes as catalysts. It is known that by changing the steric and electronic properties of the ligand framework it is possible to fine-tune the catalyst and to gain control over the polymerization reaction. The complex structures in this thesis were therefore designed so that the ligand frameworks could be fairly easily modified. Altogether 14 complexes were synthesised and used as catalysts in ethene polymerizations. It was found that the ligand framework did have an impact within the studied catalyst families.
The activities of the catalysts were affected by changes in the complex structure, and effects on the produced polymers were also observed: molecular weights and molecular weight distributions depended on the catalyst structure used. Some catalysts also produced bi- or multimodal polymers. During the last decade, high-throughput techniques developed in the pharmaceutical industry have been adopted in polyolefin research to speed up the screening and optimization of catalyst candidates. These methods can now be regarded as established and suitable for academia and industry alike. Such high-throughput techniques were used here in tailoring poly(4-methyl-1-pentene) polymers synthesized using Group IV metal complexes as catalysts. The work done in this thesis represents the first successful example in which high-throughput synthesis techniques are combined with high-throughput mechanical testing to speed up the discovery process for new polymer materials.


The main purpose of the research was to characterize chemistry matriculation examination questions as a summative assessment tool and to show how the questions have evolved over the years. Summative assessment and its various test item classifications, the Finnish goal-oriented curriculum model, and Bloom’s Revised Taxonomy of Cognitive Objectives formed the theoretical framework for the research. The research data consisted of 257 chemistry questions from 28 matriculation examinations between 1996 and 2009. The analysed test questions were formulated according to the national upper secondary school chemistry curricula of 1994 and 2003. A qualitative approach and a theory-driven content analysis method were employed in the research. Peer review was used to guarantee the reliability of the results. The research was guided by the following questions: (a) What kinds of test item formats are used in chemistry matriculation examinations? (b) How are the fundamentals of chemistry included in the chemistry matriculation examination questions? (c) What kinds of cognitive knowledge and skills do the chemistry matriculation examination questions require? The research indicates that summative assessment was used diversely in chemistry matriculation examinations. The tests included various test item formats and combinations of them. The majority of the test questions were constructed-response items: verbal, quantitative, or experimental questions, symbol questions, or combinations of these. The studied chemistry matriculation examinations seldom included selected-response items, which can be multiple-choice, alternate-choice, or matching items. The relative emphasis of the test item formats differed slightly depending on whether the test was part of an extensive general studies battery of tests in sciences and humanities or a subject-specific test.
The classification framework developed in the research can be applied in chemistry and science education, as well as in educational research. Chemistry matriculation examinations are based on the goal-oriented curriculum model and cover the fundamentals of chemistry included in the national curriculum relatively well. Most of the test questions related to the symbolism of the chemical equation, inorganic and organic reaction types and applications, bonding and spatial structure in organic compounds, and stoichiometry problems. Only a few questions related to electrolysis, polymers, or buffer solutions, and none related to composites. There were no significant differences in emphasis between the tests formulated according to the national curriculum of 1994 or 2003. Chemistry matriculation examinations are cognitively demanding: the research shows that the majority of the test questions require higher-order cognitive skills. Most of the questions required analysis of procedural knowledge. Questions that only required remembering, or that involved processing metacognitive knowledge, were not present in the research data. The required knowledge and skill level varied slightly between the test questions in the extensive general studies battery of tests in sciences and humanities and those in the subject-specific tests administered since 2006. The proportion of Finnish chemistry matriculation examination questions requiring higher-order cognitive knowledge and skills is very large compared with what is reported in the research literature.


NMR spectroscopy enables the study of biomolecules, from peptides and carbohydrates to proteins, at atomic resolution. The technique uniquely allows for structure determination of molecules in the solution state. It also gives insights into dynamics and intermolecular interactions important for determining biological function. Detailed molecular information is entangled in the nuclear spin states and can be extracted by pulse sequences designed to measure the desired molecular parameters. Advancement of pulse sequence methodology therefore plays a key role in the development of biomolecular NMR spectroscopy. A range of novel pulse sequences for solution-state NMR spectroscopy is presented in this thesis. The pulse sequences are described in relation to the molecular information they provide. The experiments represent several advances in NMR spectroscopy, with particular emphasis on applications for proteins. Some of the novel methods focus on methyl-containing amino acids, which are pivotal for structure determination. Methyl-specific assignment schemes are introduced for increasing the size range of 13C,15N-labeled proteins amenable to structure determination without resorting to more elaborate labeling schemes. Furthermore, cost-effective means are presented for monitoring amide and methyl correlations simultaneously. Residual dipolar couplings can be applied for structure refinement as well as for studying dynamics. Accurate methods for measuring residual dipolar couplings in small proteins are devised, along with special techniques applicable when proteins require high-pH or high-temperature solvent conditions. Finally, a new technique is demonstrated to diminish strong-coupling-induced artifacts in HMBC, a routine experiment for establishing long-range correlations in unlabeled molecules. The presented experiments facilitate structural studies of biomolecules by NMR spectroscopy.


In this thesis, the kinetics of several alkyl, halogenated alkyl, and alkenyl free radical reactions with NO2, O2, Cl2, and HCl were studied over a wide temperature range under time-resolved conditions. A laser photolysis / photoionisation mass spectrometer coupled to a flow reactor was the experimental method employed, and this thesis presents the first measurements performed with the constructed experimental system. During this work a great amount of effort was devoted to designing, building, testing, and improving the experimental apparatus. Carbon-centred free radicals were generated by pulsed 193 or 248 nm photolysis of suitable precursors along the tubular reactor. The kinetics were studied under pseudo-first-order conditions using either He or N2 buffer gas. The temperature and pressure ranges employed were 190–500 K and 0.5–45 torr, respectively. The possible role of heterogeneous wall reactions was investigated by employing reactor tubes of different sizes, i.e. by significantly varying the surface-to-volume ratio. In this thesis, significant new contributions to the kinetics of carbon-centred free radical reactions with nitrogen dioxide were obtained. Altogether eight substituted alkyl (CH2Cl, CHCl2, CCl3, CH2I, CH2Br, CHBr2, CHBrCl, and CHBrCH3) and two alkenyl (C2H3, C3H3) free radical reactions with NO2 were investigated as a function of temperature. The bimolecular rate coefficients of all these reactions were observed to possess negative temperature dependencies, while no pressure dependence was noticed for any of them. Halogen substitution was observed to moderately reduce the reactivity of substituted alkyl radicals toward NO2, while resonance stabilisation lowers the reactivity of the alkenyl radicals with respect to NO2 only slightly. Two reactions relevant to atmospheric chemistry, CH2Br + O2 and CH2I + O2, were also investigated.
It was noticed that while the CH2Br + O2 reaction shows a pronounced pressure dependence, characteristic of peroxy radical formation, no such dependence was observed for the CH2I + O2 reaction. The observed primary products of the CH2I + O2 reaction were the I atom and the IO radical. The kinetics of the CH3 + HCl, CD3 + HCl, CH3 + DCl, and CD3 + DCl reactions were also studied. While all these reactions possess positive activation energies, in contrast to the other systems investigated in this thesis, the CH3 + HCl and CD3 + HCl reactions show a non-linear temperature dependence on the Arrhenius plot. The reactivity of substituted methyl radicals toward NO2 was observed to increase with decreasing electron affinity of the radical. The same trend was observed for the reactions of substituted methyl radicals with Cl2. It is proposed that interactions of frontier orbitals are responsible for these observations, and that Frontier Orbital Theory can be used to explain the observed reactivity trends of these highly exothermic reactions with reactant-like transition states.
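The pseudo-first-order treatment mentioned above can be sketched numerically: with NO2 in large excess the radical concentration decays exponentially with rate k' = k_bim·[NO2] + k_wall, so the bimolecular rate coefficient is the slope of k' plotted against [NO2], with the intercept absorbing first-order wall loss. A minimal Python sketch with illustrative numbers (not the measured values):

```python
import numpy as np

def bimolecular_k(no2_concs, pseudo_first_order_ks):
    """Linear fit of k' vs [NO2]: the slope is the bimolecular rate
    coefficient, the intercept absorbs wall loss and other
    first-order sinks of the radical."""
    slope, intercept = np.polyfit(no2_concs, pseudo_first_order_ks, 1)
    return slope, intercept

# Illustrative data: k' = k_bim*[NO2] + k_wall with k_bim = 2.5e-11
no2 = np.array([0.5, 1.0, 2.0, 4.0]) * 1e13   # molecule cm^-3
k_wall = 8.0                                  # s^-1, wall loss
kprime = 2.5e-11 * no2 + k_wall               # s^-1, decay rates
k_bim, k0 = bimolecular_k(no2, kprime)        # recover slope/intercept
```

Repeating such fits at each temperature then gives the k(T) points from which the (here negative) temperature dependence is read off an Arrhenius plot.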


The development of a simple method for coating a capillary with a semi-permanent phospholipid layer for electrochromatography was the focus of this study. The work involved finding good coating conditions, stabilizing the phospholipid coating, and examining the effect of adding divalent cations, cetyltrimethylammonium bromide, and polyethylene glycol (PEG)-lipids on the stability of the coating. Since a further purpose was to move toward more biological membrane coatings, the capillaries were also coated with cholesterol-containing liposomes and liposomes of red blood cell ghost lipids. Liposomes were prepared by extrusion, and large unilamellar vesicles with a diameter of about 100 nm were obtained. Zwitterionic phosphatidylcholine (PC) was used as the basic component, mainly 1-palmitoyl-2-oleyl-sn-glycero-3-phosphocholine (POPC) but also egg PC and 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC). Different amounts of sphingomyelin, bovine brain phosphatidylserine, and cholesterol were added to the PC. The stability of the coating in 40 mM N-(2-hydroxyethyl)piperazine-N’-(2-ethanesulfonic acid) (HEPES) solution at pH 7.4 was studied by measuring the electroosmotic flow and by separating neutral steroids, basic proteins, and low-molar-mass drugs. The presence of PC in the coating solution was found to be essential to achieving a coating. The stability of the coating was improved by the addition of negative phosphatidylserine, cholesterol, divalent cations, or PEGylated lipids, and by working in the gel-state region of the phospholipid. A study of the effect of the divalent metal ions calcium, magnesium, and zinc on the PC coating showed that a molar ratio of 1:3 PC/Ca2+ or PC/Mg2+ gave increased rigidity of the membrane and the best coating stability. The PEGylated lipids used in the study were sterically stabilized commercial lipids with covalently attached PEG chains.
The vesicle size generally decreased when PEGylated lipids of higher molar mass were present in the vesicle. The predominance of discoidal micelles over liposomes increased with PEG chain length, and the average size of the vesicles thus decreased. In the capillary electrophoresis (CE) measurements a highly stable electroosmotic flow was achieved with 20% PEGylated lipid in the POPC coating dispersion, the best results being obtained for distearoyl PEG(3000) conjugates. The results suggest that smaller particles (discoidal micelles) result in tighter packing and better shielding of the silanol groups on the silica wall. The effect of temperature on the coating stability was investigated by using DPPC liposomes at temperatures above (45 °C) and below (25 °C) the main phase transition temperature. Better results were obtained with DPPC in the more rigid gel state than in the fluid state: the electroosmotic flow was heavily suppressed and the PC coating was stabilized. Dispersions of DPPC with 0–30 mol% of cholesterol and sphingomyelin in different ratios, which more closely resemble natural membranes, also resulted in stable coatings. Finally, the CE measurements revealed that a stable coating is formed when capillaries are coated with liposomes of red blood cell ghost lipids.
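Coating stability was followed via the electroosmotic flow. In CE this is conventionally expressed as an electroosmotic mobility computed from the migration time of a neutral marker, mu_eof = (L_d · L_t) / (V · t). A minimal sketch of that standard relation (the instrument dimensions and migration time below are illustrative, not values from the thesis):

```python
def eof_mobility(l_det_cm, l_tot_cm, voltage_v, t_marker_s):
    """Electroosmotic mobility (cm^2 V^-1 s^-1) from the migration
    time of a neutral marker: mu = L_d * L_t / (V * t), where L_d is
    the length to the detector and L_t the total capillary length."""
    return (l_det_cm * l_tot_cm) / (voltage_v * t_marker_s)

# Illustrative: 50 cm to detector, 58.5 cm total, 20 kV, marker at 180 s
mu = eof_mobility(50.0, 58.5, 20000.0, 180.0)
```

A coating that suppresses or stabilizes mu over repeated runs is what "stable" means operationally here.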


Comprehensive two-dimensional gas chromatography (GC×GC) offers enhanced separation efficiency, reliability in qualitative and quantitative analysis, the capability to detect low quantities, and information on the whole sample and its components. These features are essential in the analysis of complex samples, in which the number of compounds may be large or the analytes of interest are present at trace level. This study involved the development of instrumentation, data analysis programs and methodologies for GC×GC, and their application in studies on qualitative and quantitative aspects of GC×GC analysis. Environmental samples were used as model samples. The instrumental development comprised the construction of three versions of a semi-rotating cryogenic modulator, in which modulation was based on two-step cryogenic trapping with continuously flowing carbon dioxide as the coolant. Two-step trapping was achieved by rotating the nozzle spraying the carbon dioxide with a motor. The fastest rotation and highest modulation frequency were achieved with a permanent magnet motor, and modulation was most accurate when the motor was controlled with a microcontroller containing a quartz crystal. Heated wire resistors were unnecessary for the desorption step when liquid carbon dioxide was used as the coolant. With the modulators developed in this study, the narrowest peaks were 75 ms at base. Three data analysis programs were developed, allowing basic, comparison and identification operations. The basic operations enabled the visualisation of two-dimensional plots and the determination of retention times, peak heights and volumes. The overlaying feature of the comparison program allowed easy comparison of 2D plots. An automated identification procedure based on mass spectra and retention parameters allowed the qualitative analysis of data obtained by GC×GC and time-of-flight mass spectrometry.
In the methodological development, sample preparation (extraction and clean-up) and GC×GC methods were developed for the analysis of atmospheric aerosol and sediment samples. Dynamic sonication-assisted extraction was well suited for atmospheric aerosols collected on a filter. A clean-up procedure utilising normal-phase liquid chromatography with ultraviolet detection worked well in removing aliphatic hydrocarbons from a sediment extract. GC×GC with flame ionisation detection or quadrupole mass spectrometry provided good reliability in the qualitative analysis of target analytes. However, GC×GC with time-of-flight mass spectrometry was needed for the analysis of unknowns. The automated identification procedure that was developed was efficient in the analysis of large data files, but manual searching and analyst knowledge are invaluable as well. Quantitative analysis was examined in terms of calibration procedures and the effect of matrix compounds on GC×GC separation. In addition to calibration in GC×GC with summed peak areas or peak volumes, a simplified area calibration based on the normal GC signal can be used to quantify compounds in samples analysed by GC×GC, so long as certain qualitative and quantitative prerequisites are met. In a study of the effect of matrix compounds on GC×GC separation, it was shown that the quality of the separation of PAHs is not significantly disturbed by the amount of matrix, and that quantitativeness suffers only slightly in the presence of matrix when the amount of target compounds is low. The benefits of GC×GC in the analysis of complex samples easily overcome some minor drawbacks of the technique. The developed instrumentation and methodologies performed well for environmental samples, but they could also be applied to other complex samples.
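The summed-area calibration rests on the fact that the modulator slices each analyte into several second-dimension peaks, and the sum of their areas corresponds to the single peak area the same analyte would give in a conventional 1D run, so a 1D calibration curve can be reused. A minimal sketch of that summation (the slice areas are illustrative numbers only):

```python
import numpy as np

def summed_modulated_area(areas_per_modulation):
    """In GCxGC one analyte is cut into several modulated peaks by
    the modulator; summing their areas approximates the equivalent
    single 1D peak area, enabling a simplified area calibration."""
    return float(np.sum(areas_per_modulation))

# Illustrative: one analyte sliced into four modulation periods
slices = [120.0, 540.0, 310.0, 30.0]
total = summed_modulated_area(slices)   # comparable to a 1D peak area
```

The prerequisites mentioned in the text (e.g. complete transfer of the analyte through the modulator) are what make this equivalence hold.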


Multi- and intralake datasets of fossil midge assemblages in surface sediments of small shallow lakes in Finland were studied to determine the most important environmental factors explaining trends in midge distribution and abundance. The aim was to develop palaeoenvironmental calibration models for the most important environmental variables for the purpose of reconstructing past environmental conditions. The developed models were applied to three high-resolution fossil midge stratigraphies from southern and eastern Finland to interpret environmental variability over the past 2000 years, with special focus on the Medieval Climate Anomaly (MCA), the Little Ice Age (LIA) and recent anthropogenic changes. The midge-based results were compared with physical properties of the sediment, historical evidence and environmental reconstructions based on diatoms (Bacillariophyta), cladocerans (Crustacea: Cladocera) and tree rings. The results showed that the most important environmental factor controlling midge distribution and abundance along a latitudinal gradient in Finland was the mean July air temperature (TJul). However, when the dataset was environmentally screened to include only pristine lakes, water depth at the sampling site became more important. Furthermore, when the dataset was geographically scaled to southern Finland, hypolimnetic oxygen conditions became the dominant environmental factor. The results from an intralake dataset from eastern Finland showed that the most important environmental factors controlling midge distribution within a lake basin were river contribution, water depth and submerged vegetation patterns. In addition, the results of the intralake dataset showed that the fossil midge assemblages represent fauna that lived in close proximity to the sampling sites, thus enabling the exploration of within-lake gradients in midge assemblages. 
Importantly, this within-lake heterogeneity in midge assemblages may affect midge-based temperature estimations, because samples taken from the deepest point of a lake basin may infer considerably colder temperatures than expected, as shown by the present test results. It is therefore suggested here that the samples in fossil midge studies of shallow boreal lakes should be taken from the sublittoral, where the assemblages are most representative of the whole-lake fauna. Transfer functions between the midge assemblages and the environmental forcing factors that were significantly related to the assemblages, including mean air TJul, water depth, hypolimnetic oxygen, stream flow and distance to littoral vegetation, were developed using weighted averaging (WA) and weighted averaging-partial least squares (WA-PLS) techniques, which outperformed all the other tested numerical approaches. Application of the models in downcore studies showed mostly consistent trends. Based on the present results, which agreed with previous studies and historical evidence, the Medieval Climate Anomaly between ca. 800 and 1300 AD in eastern Finland was characterized by warm temperature conditions and dry summers, but probably humid winters. The Little Ice Age (LIA) prevailed in southern Finland from ca. 1550 to 1850 AD, with the coldest conditions occurring at ca. 1700 AD, whereas in eastern Finland the cold conditions prevailed over a longer period, from ca. 1300 until 1900 AD. The recent climatic warming was clearly represented in all of the temperature reconstructions. In terms of long-term climatology, the present results support the concept that the North Atlantic Oscillation (NAO) index correlates positively with winter precipitation and annual temperature and negatively with summer precipitation in eastern Finland.
In general, the results indicate a relatively warm climate with dry summers but snowy winters during the MCA and a cool climate with rainy summers and dry winters during the LIA. The results of the present reconstructions and the forthcoming applications of the models can be used in assessments of long-term environmental dynamics to refine the understanding of past environmental reference conditions and natural variability required by environmental scientists, ecologists and policy makers to make decisions concerning the presently occurring global, regional and local changes. The developed midge-based models for temperature, hypolimnetic oxygen, water depth, littoral vegetation shift and stream flow, presented in this thesis, are open for scientific use on request.
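The weighted averaging (WA) technique used for the transfer functions can be sketched in a few lines: a taxon's optimum is the abundance-weighted mean of the environmental variable across the training lakes, and the inference for a fossil sample is the abundance-weighted mean of the optima of the taxa present. A minimal Python sketch with toy data (plain WA without the deshrinking regression used in practice):

```python
import numpy as np

def wa_optima(abundances, env):
    """Taxon optima: abundance-weighted mean of the environmental
    variable (e.g. mean July air temperature) over the training
    lakes. abundances: lakes x taxa matrix; env: one value per lake."""
    return (abundances.T @ env) / abundances.sum(axis=0)

def wa_infer(sample, optima):
    """Inferred value for a fossil sample: abundance-weighted mean
    of the optima of the taxa present (simple WA, no deshrinking)."""
    return float(sample @ optima / sample.sum())

# Toy training set: 3 lakes, 2 midge taxa, known July temperatures
X = np.array([[10.0, 0.0],
              [5.0,  5.0],
              [0.0, 10.0]])
t_jul = np.array([12.0, 14.0, 16.0])
opt = wa_optima(X, t_jul)                  # per-taxon optima
est = wa_infer(np.array([5.0, 5.0]), opt)  # inferred temperature
```

Real applications add deshrinking and cross-validated error estimates; WA-PLS extends the same idea with further components.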


Most countries of Europe, as well as many countries in other parts of the world, are experiencing an increased impact of natural hazards. It is often speculated, but not yet proven, that climate change might influence the frequency and magnitude of certain hydro-meteorological natural hazards. What has certainly been observed is a sharp increase in financial losses caused by natural hazards worldwide. Even though Europe appears not to be affected by natural hazards to such catastrophic extents as other parts of the world, the damages experienced here are certainly increasing too. Natural hazards, climate change and, in particular, risks have therefore recently been put high on the political agenda of the EU. In the search for appropriate instruments for mitigating the impacts of natural hazards, climate change and risks, the integration of these factors into spatial planning practices is receiving ever greater attention. The focus of most approaches lies on single hazards and climate change mitigation strategies. The current paradigm shift from climate change mitigation to adaptation is used as a basis to draw conclusions and recommendations on which concepts could be further incorporated into spatial planning practices. Multi-hazard approaches in particular are discussed as an important approach that should be developed further. One focal point is the definition and applicability of the terms natural hazard, vulnerability and risk in spatial planning practices. Vulnerability and risk concepts especially are so manifold and complicated that their application in spatial planning has to be analysed most carefully. The PhD thesis is based on six published articles that describe the results of European research projects which have elaborated strategies and tools for integrated communication and assessment practices on natural hazards and climate change impacts.
The papers describe approaches at the local, regional and European levels, from both theoretical and practical perspectives. Based on these, past, current and potential future spatial planning applications are reviewed and discussed. In conclusion, it is recommended to shift from single-hazard assessments to multi-hazard approaches that integrate potential climate change impacts. Vulnerability concepts should play a stronger role than at present, and adaptation to natural hazards and climate change should be more strongly emphasized in relation to mitigation. It is outlined that the integration of risk concepts in planning is rather complicated and would need very careful assessment to ensure applicability. Future spatial planning practices should also aim to be more interdisciplinary, i.e. to integrate as many stakeholders and experts as possible to ensure the sustainability of investments.


Precipitation-induced runoff and leaching from milled peat mining mires by peat types: a comparative method for estimating the loading of water bodies during peat production. This research project in environmental geology arose out of an observed need to be able to predict more accurately the loading of watercourses with detrimental organic substances and nutrients from existing and planned peat production areas, since the authorities' capacity for insisting on such predictions, covering the whole duration of peat production, in connection with environmental impact evaluations is at present highly limited. National and international decisions regarding the monitoring of the condition of watercourses and their improvement and restoration require more sophisticated evaluation methods in order to forecast watercourse loading and its environmental impacts at the stage of land-use planning and preparations for peat production. The present project thus set out from the premise that it would be possible, on the basis of existing mire and peat data, to construct estimates of the typical loading from production mires over the whole duration of their exploitation. Finland has some 10 million hectares of peatland, accounting for almost a third of its total area. Macroclimatic conditions have varied in the course of the Holocene growth and development of this peatland, and with them the habitats of the peat-forming plants. Temperatures and moisture conditions have played a significant role in determining the dominant species of mire plants growing there at any particular time, the resulting mire types, and the accumulation and deposition of plant remains to form the peat.
The above climatic, environmental and mire development factors, together with ditching, have contributed, and continue to contribute, to the existence of peat horizons that differ in their physical and chemical properties, leading to differences in material transport between peatlands in a natural state and mires that have been ditched or prepared for forestry and peat production. Watercourse loading from the ditching of mires or their use for peat production can have detrimental effects on river and lake environments and their recreational use, especially where oxygen-consuming organic solids and soluble organic substances and nutrients are concerned. It has not previously been possible, however, to estimate in advance the watercourse loading likely to arise from ditching and peat production on the basis of the characteristics of the peat in a mire, although earlier observations have indicated that watercourse loading from peat production can vary greatly, and it has been suggested that differences in peat properties may be of significance in this. Sprinkling is used here in combination with simulations of conditions in a milled peat production area to determine the influence of the physical and chemical properties of milled peats in production mires on surface runoff into the drainage ditches and on the concentrations of material in the runoff water. Sprinkling and extraction experiments were carried out on 25 samples of milled Carex (C) and Sphagnum (S) peat of humification grades H 2.5–8.5 with moisture contents in the range 23.4–89% at the commencement of the first sprinkling, which was followed by a second sprinkling 24 hours later. The water retention capacity of the peat was best, and surface runoff lowest, with Sphagnum and Carex peat samples of humification grades H 2.5–6 in the moisture content class 56–75%. On account of the hydrophobicity of dry peat, runoff increased in a fairly regular manner with drying of the sample from 55% down to 24–30%.
Runoff from the samples with an original moisture content over 55% increased by 63% in the second round of sprinkling relative to the first, as they had practically reached saturation point on the first occasion, while those with an original moisture content below 55% retained their high runoff in the second round, due to continued hydrophobicity. The well-humified samples (H 6.5–8.5) with a moisture content over 80% showed a low water retention capacity and high runoff in both rounds of sprinkling. Loading of the runoff water with suspended solids, total phosphorus and total nitrogen, and also the chemical oxygen demand (CODMn O2), varied greatly in the sprinkling experiment, depending on the peat type and degree of humification, but the concentrations of the same substances in the two sprinklings were closely or moderately closely correlated, and these correlations were significant. The concentrations of suspended solids in the runoff water observed in the simulations of a peat production area and the direct surface runoff from it into the drainage ditch system in response to rain (sprinkling intensity 1.27 mm/min) varied c. 60-fold between the degrees of humification for the Carex peats and c. 150-fold for the Sphagnum peats, while chemical oxygen demand varied c. 30-fold and c. 50-fold, respectively, total phosphorus c. 60-fold and c. 66-fold, total nitrogen c. 65-fold and c. 195-fold, and ammonium nitrogen c. 90-fold and c. 30-fold. The increases in concentrations in the runoff water were very closely correlated with increases in the humification of the peat. The correlations of the concentrations measured in the extraction experiments (48 h) with peat type and degree of humification corresponded to those observed in the sprinkling experiments.
The resulting figures for the surface runoff from a peat production area into the drainage ditches, simulated by means of sprinkling, and the material concentrations in the runoff water were combined with statistics on the mean extent of daily rainfall (0–67 mm) during the frost-free period of the year (May–October) over an observation period of 30 years to yield typical annual loading figures (kg/ha) for suspended solids (SS), chemical oxygen demand of organic matter (CODMn O2), total phosphorus (tot. P) and total nitrogen (tot. N) entering the ditches with respect to milled Carex (C) and Sphagnum (S) peats of humification grades H 2.5–8.5. In order to calculate the loading of drainage ditches from a milled peat production mire with the aid of these annual comparative values (in kg/ha), information is required on the properties of the intended production mire and its peat. Once data are available on the area of the mire, its peat depth, peat types and their degrees of humification, dry matter content, calorific value and corresponding energy content, it is possible to produce mutually comparable estimates for individual mires with respect to the annual loading of the drainage ditch system and the surrounding watercourse for the whole service life of the production area, the duration of this service life, determinations of energy content, and the amount of loading per unit of energy generated (kg/MWh). In the 8 mires of the Köyhäjoki basin, Central Ostrobothnia, taken as an example, the loading of suspended solids (SS) in the drainage ditch networks, calculated on the basis of the typical values obtained here and existing mire and peat data and expressed per unit of energy generated, varied between the mires and horizons in the range 0.9–16.5 kg/MWh. One of the aims of this work was to develop means of making better use of existing mire and peat data and the results of corings and other field investigations.
In this respect combination of the typical loading values (kg/ha) obtained here for S, SC, CS and C peats and the various degrees of humification (H 2.5–8.5) with the above mire and peat data by means of a computer program for the acquisition and handling of such data would enable all the information currently available and that deposited in the system in the future to be used for defining watercourse loading estimates for mires and comparing them with the corresponding estimates of energy content. The intention behind this work has been to respond to the challenge facing the energy generation industry to find larger peat production areas that exert less loading on the environment and to that facing the environmental authorities to improve the means available for estimating watercourse loading from peat production and its environmental impacts in advance. The results conform well to the initial hypothesis and to the goals laid down for the research and should enable watercourse loading from existing and planned peat production to be evaluated better in the future and the resulting impacts to be taken into account when planning land use and energy generation. The advance loading information available in this way would be of value in the selection of individual peat production areas, the planning of their exploitation, the introduction of water protection measures and the planning of loading inspections, in order to achieve controlled peat production that pays due attention to environmental considerations.
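The loading-per-energy calculation described above can be sketched as follows. This is an illustrative reconstruction with invented input values, not the thesis's actual computer program, and the function and parameter names are hypothetical.

```python
def energy_content_mwh(area_ha, peat_depth_m, dry_matter_kg_per_m3,
                       calorific_value_mwh_per_tonne):
    """Energy content of a production area, from basic mire and peat data."""
    volume_m3 = area_ha * 10_000 * peat_depth_m          # 1 ha = 10 000 m2
    dry_matter_tonnes = volume_m3 * dry_matter_kg_per_m3 / 1000
    return dry_matter_tonnes * calorific_value_mwh_per_tonne

def loading_per_energy(area_ha, annual_loading_kg_per_ha,
                       service_life_years, energy_content_mwh):
    """Total loading to the ditch network over the service life of the
    production area, expressed per unit of energy generated (kg/MWh)."""
    total_loading_kg = area_ha * annual_loading_kg_per_ha * service_life_years
    return total_loading_kg / energy_content_mwh
```

With, for example, a 100 ha mire, 2 m of peat at 100 kg/m3 dry matter and 5.5 MWh per tonne, an assumed typical loading value of 300 kg/ha per year over a 25-year service life gives roughly 0.7 kg/MWh, the same order of magnitude as the Köyhäjoki figures quoted above.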

Resumo:

A new rock mass classification scheme, the Host Rock Classification system (HRC-system), has been developed for evaluating the suitability of volumes of rock mass for the disposal of high-level nuclear waste in Precambrian crystalline bedrock. To support the development of the system, the requirements placed on host rock to be used for disposal have been studied in detail and the significance of the various rock mass properties has been examined. The HRC-system considers both the long-term safety of the repository and the constructability of the rock mass. The system is specific to the KBS-3V disposal concept and can be used only at sites that have been evaluated to be suitable at the site scale. By using the HRC-system, it is possible to identify potentially suitable volumes within the site at several different scales (repository, tunnel and canister scales). The selection of the classification parameters to be included in the HRC-system is based on an extensive study of the rock mass properties and their various influences on the long-term safety, the constructability and the layout and location of the repository. The parameters proposed for classification at the repository scale include fracture zones, the strength/stress ratio, hydraulic conductivity and the Groundwater Chemistry Index. The parameters proposed for classification at the tunnel scale include hydraulic conductivity, Q´ and fracture zones, and the parameters proposed for classification at the canister scale include hydraulic conductivity, Q´, fracture zones, fracture width (aperture + filling) and fracture trace length. The parameter values are used to determine the suitability classes for the volumes of rock to be classified.
The HRC-system includes four suitability classes at the repository and tunnel scales and three suitability classes at the canister scale, and the classification process is linked to several important decisions regarding the location and acceptability of many components of the repository at all three scales. The HRC-system is thereby one possible design tool that aids in locating the different repository components in volumes of host rock that are more suitable than others and that are considered to fulfil the fundamental requirements set for the repository host rock. The generic HRC-system, which is the main result of this work, is also adjusted to the site-specific properties of the Olkiluoto site in Finland, and the classification procedure is demonstrated by a test classification using data from Olkiluoto.

Keywords: host rock, classification, HRC-system, nuclear waste disposal, long-term safety, constructability, KBS-3V, crystalline bedrock, Olkiluoto
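The rule-based idea behind such a classification, assigning a volume of rock the most restrictive suitability class triggered by any of its parameter values, can be sketched as below. All thresholds here are invented for illustration only; the actual HRC-system class limits are defined in the thesis.

```python
# Hypothetical sketch of a parameter-based suitability classification at the
# canister scale. The parameter set follows the abstract (hydraulic
# conductivity, Q', fracture zones, fracture width); the numeric limits are
# assumptions, not the HRC-system's real criteria.

CANISTER_CLASSES = ["suitable", "conditionally suitable", "unsuitable"]

def classify_canister_volume(hydraulic_conductivity, q_prime,
                             in_fracture_zone, fracture_width_mm):
    """Return the worst (most restrictive) class triggered by any parameter."""
    if in_fracture_zone:
        return "unsuitable"                # fracture zones are excluded outright
    worst = 0                              # start from the best class
    if hydraulic_conductivity > 1e-9:      # m/s, invented limit
        worst = max(worst, 2 if hydraulic_conductivity > 1e-7 else 1)
    if q_prime < 10:                       # rock quality index Q', invented limit
        worst = max(worst, 1)
    if fracture_width_mm > 1.0:            # aperture + filling, invented limit
        worst = max(worst, 1)
    return CANISTER_CLASSES[worst]
```

Taking the worst class over all parameters reflects the design intent described above: a volume is acceptable only if it fulfils every fundamental requirement simultaneously.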

Resumo:

In Finland, one of the most important current issues in environmental management is the quality of surface waters. The increasing social importance of lakes and water systems has generated wide-ranging interest in lake restoration and management, concerning especially lakes suffering from eutrophication, but also from other environmental impacts. Most of the factors deteriorating the water quality in Finnish lakes are connected to human activities. Especially since the 1940s, intensified farming practices and the conduction of sewage waters from scattered settlements, cottages and industry have affected the lakes, which have simultaneously developed into recreational areas for a growing number of people. Therefore, this study was focused on small lakes which are human-impacted, located close to settlement areas and of significant value to the local population. The aim of this thesis was to obtain information from lake sediment records for ongoing lake restoration activities and to show that a well-planned, properly focused lake sediment study is an essential part of the work related to the evaluation, target consideration and restoration of Finnish lakes. Altogether 11 lakes were studied. The study of Lake Kaljasjärvi was related to the gradual eutrophication of the lake. In lakes Ormajärvi, Suolijärvi, Lehee, Pyhäjärvi and Iso-Roine the main focus was on sediment mapping, as well as on the long-term changes in sedimentation, which were compared with Lake Pääjärvi. In Lake Hormajärvi, the roles of the different kinds of sedimentation environments in the eutrophication development of the lake's two basins were compared. Lake Orijärvi has not been eutrophied, but ore exploitation and the related acid mine drainage from the catchment area have affected the lake drastically, and the changes caused by the metal load were investigated.
The twin lakes Etujärvi and Takajärvi are slightly eutrophied, but also suffer problems associated with the erosion of the substantial peat accumulations covering the fringe areas of the lakes. These peat accumulations are related to Holocene water level changes, which were investigated. The methods used were chosen case-specifically for each lake. In general, acoustic soundings of the lakes, detailed description of the nature of the sediment and determinations of the physical properties of the sediment, such as water content, loss on ignition and magnetic susceptibility, were used, as was grain size analysis. A wide set of chemical analyses was also used. Diatom and chrysophycean cyst analyses were applied, and the diatom-inferred total phosphorus content was reconstructed. The results of these studies show that the ideal lake sediment study, as part of a lake management project, should be two-phased. In the first phase, thoroughgoing mapping of sedimentation patterns should be carried out by soundings and adequate corings. The actual sampling, based on the preliminary results, must include at least one long core from the main sedimentation basin for determining the natural background state of the lake. The recent, artificially impacted development of the lake can then be determined by short-core and surface sediment studies. The sampling must again be focused on the basis of the sediment mapping, and it should represent all the different sedimentation environments and bottom dynamic zones, considering the inlets and outlets, as well as the effects of possible point sources of loading into the lake. In practice, the budget of lake management projects is usually limited and only the most essential work and analyses can be carried out. The set of chemical and biological analyses and dating methods must therefore be thoroughly considered and adapted to the specific management problem.
The results also show that the information obtained from a properly performed sediment study enhances the planning of the restoration, makes it possible to define the target of the remediation activities and improves the cost-efficiency of the project.
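The gravimetric sediment properties mentioned above, water content and loss on ignition, follow standard formulas; a minimal sketch, assuming the usual 105 °C drying and 550 °C ignition convention:

```python
def water_content(wet_mass_g, dry_mass_g):
    """Water content as % of wet mass (gravimetric, drying at 105 °C)."""
    return 100.0 * (wet_mass_g - dry_mass_g) / wet_mass_g

def loss_on_ignition(dry_mass_g, ashed_mass_g):
    """Loss on ignition as % of dry mass (ignition at 550 °C), a standard
    proxy for the organic-matter content of lake sediment."""
    return 100.0 * (dry_mass_g - ashed_mass_g) / dry_mass_g
```

For example, a 10 g wet sample drying to 2 g has a water content of 80%, and a dry sample losing 0.5 g of its 2 g on ignition has a loss on ignition of 25%.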

Resumo:

Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or to base the inference only on the likelihood function, may be a fundamental question in theory, but in practice it may well be of less importance if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where parametric assumptions in standard statistical models would be too rigid, more flexible model formulation, combined with fully probabilistic inference, can be achieved using hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to various problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two of them hierarchical Bayesian modeling. Because maximum likelihood may be presented as a special case of Bayesian inference, but not the other way round, in the introductory part of this work we present a framework for probability-based inference using only Bayesian concepts. We also re-derive some results presented in the original articles using the toolbox developed herein, to show that they are also justifiable under this more general framework. Here the assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that the same reasoning can also be applied under sampling from a finite population. The main emphasis is on probability-based inference under incomplete observation due to study design. This is illustrated using a generic two-phase cohort sampling design as an example.
The alternative approaches presented for analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set, conditioning on the rule that generated that set. Conditional likelihood inference is also applied for a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is feasible also in this case.
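The point that maximum likelihood can be presented as a special case of Bayesian inference admits a simple numerical illustration: with a flat Beta(1, 1) prior on a binomial proportion, the posterior mode coincides with the MLE. This is a generic textbook example, not taken from the articles.

```python
from math import isclose

def binomial_mle(k, n):
    """MLE of the success probability from k successes in n trials."""
    return k / n

def posterior_mode_flat_prior(k, n, a=1.0, b=1.0):
    """Posterior mode under a Beta(a, b) prior: the posterior is
    Beta(a + k, b + n - k), with mode (a + k - 1) / (a + b + n - 2).
    With the flat prior a = b = 1 this reduces to k / n, the MLE."""
    return (a + k - 1) / (a + b + n - 2)
```

For k = 7, n = 20 both quantities equal 0.35, while any informative prior (a, b ≠ 1) pulls the posterior mode away from the MLE.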

Resumo:

This thesis consists of an introduction, four research articles and an appendix. The thesis studies the relations between two different approaches to the continuum limit of models of two-dimensional statistical mechanics at criticality. The approach of conformal field theory (CFT) can be thought of as the algebraic classification of some basic objects in these models. It has been successfully used by physicists since the 1980s. The other approach, Schramm-Loewner evolutions (SLEs), is a recently introduced set of mathematical methods for studying the random curves or interfaces occurring in the continuum limit of the models. The first and second included articles argue, on the basis of statistical mechanics, what would be a plausible relation between SLEs and conformal field theory. The first article studies multiple SLEs, that is, several random curves simultaneously in a domain. The proposed definition is compatible with a natural commutation requirement suggested by Dubédat. The curves of a multiple SLE may form different topological configurations, "pure geometries". We conjecture a relation between the topological configurations and the CFT concepts of conformal blocks and operator product expansions. Example applications of multiple SLEs include crossing probabilities for percolation and the Ising model. The second article studies SLE variants that represent models with boundary conditions implemented by primary fields. The most well known of these, SLE(kappa, rho), is shown to be simple in terms of the Coulomb gas formalism of CFT. In the third article the space of local martingales for variants of SLE is shown to carry a representation of the Virasoro algebra. Finding this structure is guided by the relation of SLEs and CFTs in general, but the result is established in a straightforward fashion. This article, too, emphasizes multiple SLEs and proposes a possible way of treating pure geometries in terms of the Coulomb gas.
The fourth article states results of applications of the Virasoro structure to the open questions of SLE reversibility and duality. Proofs of the stated results are provided in the appendix. The objective is an indirect computation of certain polynomial expected values. Provided that these expected values exist, in generic cases they are shown to possess the desired properties, thus giving support for both reversibility and duality.
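The random curves studied here can be simulated with a standard discretisation of the chordal Loewner equation: the Brownian driving function is approximated as piecewise constant and the resulting slit maps are composed. The sketch below is a generic implementation of this well-known scheme, not code from the thesis.

```python
import cmath
import numpy as np

def sample_sle_trace(kappa, n_steps=500, t_max=1.0, seed=0):
    """Sample points on a chordal SLE(kappa) trace in the upper half plane
    via the piecewise-constant-driving (vertical slit) discretisation of
    the Loewner equation. O(n_steps^2), intended for illustration only."""
    rng = np.random.default_rng(seed)
    dt = t_max / n_steps
    # Brownian driving function W_t = sqrt(kappa) B_t, sampled on the grid
    dW = np.sqrt(kappa * dt) * rng.standard_normal(n_steps)
    W = np.concatenate([[0.0], np.cumsum(dW)])

    def half_plane_sqrt(z):
        # branch of the square root landing in the closed upper half plane
        r = cmath.sqrt(z)
        return r if r.imag >= 0 else -r

    trace = []
    for n in range(1, n_steps + 1):
        # tip gamma(t_n) = F_1 o F_2 o ... o F_n (W_n), where F_k inverts
        # the Loewner flow over [t_{k-1}, t_k] with constant driving W_{k-1}
        z = complex(W[n])
        for k in range(n, 0, -1):
            z = W[k - 1] + half_plane_sqrt((z - W[k - 1]) ** 2 - 4 * dt)
        trace.append(z)
    return np.array(trace)
```

By construction every sampled tip lies in the closed upper half plane; varying kappa across the values 8/3, 4 and 6 reproduces the qualitatively different phases of SLE curves.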

Resumo:

The monograph dissertation deals with kernel integral operators and their mapping properties on Euclidean domains. The associated kernels are weakly singular; examples of such kernels are given by the Green functions of certain elliptic partial differential equations. It is well known that the mapping properties of the corresponding Green operators can be used to deduce a priori estimates for the solutions of these equations. In the dissertation, natural size and cancellation conditions are quantified for kernels defined in domains. These kernels induce integral operators which are then composed with any partial differential operator of prescribed order, depending on the size of the kernel. The main object of study is the boundedness of such compositions, and the main result characterizes their Lp-boundedness on suitably regular domains. In case the aforementioned kernels are defined on the whole Euclidean space, their partial derivatives of prescribed order turn out to be so-called standard kernels, which arise in connection with singular integral operators. The Lp-boundedness of singular integrals is characterized by the T1 theorem, originally due to David and Journé and published in 1984 (Ann. of Math. 120). The main result of the dissertation can be interpreted as a T1 theorem for weakly singular integral operators. The dissertation also deals with special convolution-type weakly singular integral operators defined on Euclidean spaces.
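To make the notion of a convolution-type weakly singular operator concrete, the sketch below discretises K f(x) = int_0^1 |x - y|^(-alpha) f(y) dy on a midpoint grid: because the kernel is integrable, the singular diagonal cell can be integrated exactly and the discretised operator stays finite. This is a generic numerical illustration with an assumed grid and alpha = 1/2, not material from the dissertation.

```python
import numpy as np

def weakly_singular_apply(f_vals, grid, alpha=0.5):
    """Apply Kf(x) = int_0^1 |x - y|^(-alpha) f(y) dy on a uniform midpoint
    grid, integrating the singular diagonal cell [-h/2, h/2] exactly."""
    h = grid[1] - grid[0]
    K = np.abs(grid[:, None] - grid[None, :])
    with np.errstate(divide="ignore"):
        K = K ** (-alpha)                      # inf on the diagonal, fixed below
    # exact integral of |u|^(-alpha) over the diagonal cell, valid for alpha < 1
    diag = 2 * (h / 2) ** (1 - alpha) / (1 - alpha)
    Kh = K * h                                 # midpoint-rule weights off-diagonal
    np.fill_diagonal(Kh, diag)
    return Kh @ f_vals
```

As a check, for f = 1 the operator has the closed form Kf(x) = 2(sqrt(x) + sqrt(1 - x)) when alpha = 1/2, and the quadrature reproduces it to a few percent on a modest grid.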

Resumo:

The focus of this study is on the statistical analysis of categorical responses where the response values are dependent on each other. The most typical example of this kind of dependence is when repeated responses have been obtained from the same study unit. For example, in Paper I, the response of interest is pneumococcal nasopharyngeal carriage (yes/no) in 329 children. For each child, carriage is measured nine times during the first 18 months of life, and thus the repeated responses on each child cannot be assumed to be independent of each other. In the case of the above example, the interest typically lies in the carriage prevalence and in whether different risk factors affect the prevalence. Regression analysis is the established method for studying the effects of risk factors. In order to make correct inferences from the regression model, the associations between repeated responses need to be taken into account. The analysis of repeated categorical responses typically focuses on regression modelling. However, further insights can also be gained by investigating the structure of the association. The central theme in this study is the development of joint regression and association models. The analysis of repeated, or otherwise clustered, categorical responses is computationally difficult. Likelihood-based inference is often feasible only when the number of repeated responses for each study unit is small. In Paper IV, an algorithm is presented which substantially facilitates maximum likelihood fitting, especially when the number of repeated responses increases. In addition, a notable result arising from this work is the freely available software for likelihood-based estimation of clustered categorical responses.
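As a generic illustration of likelihood-based analysis of clustered binary responses (not the models developed in the papers), the beta-binomial model captures within-cluster association through its shape parameters: the sketch below writes out its log-likelihood and fits it by a crude grid search, with the grid, step size and parametrisation all being arbitrary choices made for this example.

```python
from math import lgamma

def betabinom_loglik(data, a, b):
    """Beta-binomial log-likelihood; data is a list of (successes k,
    cluster size n) pairs, e.g. carriage counts per child."""
    def logbeta(x, y):
        return lgamma(x) + lgamma(y) - lgamma(x + y)
    ll = 0.0
    for k, n in data:
        ll += (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
               + logbeta(a + k, b + n - k) - logbeta(a, b))
    return ll

def fit_betabinom(data, grid=50, step=0.2):
    """Maximum likelihood over a coarse (a, b) grid; returns the marginal
    prevalence a/(a+b) and the intra-cluster correlation 1/(a+b+1)."""
    best = None
    for i in range(1, grid):
        for j in range(1, grid):
            a, b = i * step, j * step
            ll = betabinom_loglik(data, a, b)
            if best is None or ll > best[0]:
                best = (ll, a, b)
    _, a, b = best
    return a / (a + b), 1 / (a + b + 1)
```

On strongly clustered artificial data, for instance children who are either always or never carriers across their nine measurements, the fit recovers a prevalence near 0.5 together with a high intra-cluster correlation, whereas an independent-binomial analysis would ignore the association entirely.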