30 results for "further steps"


Relevance: 20.00%

Abstract:

New stars form in dense interstellar clouds of gas and dust called molecular clouds. The actual sites where the process of star formation takes place are the dense clumps and cores deeply embedded in molecular clouds. The details of the star formation process are complex and not completely understood. Thus, determining the physical and chemical properties of molecular cloud cores is necessary for a better understanding of how stars are formed. Some of the main features of the origin of low-mass stars, like the Sun, are already relatively well known, though many details of the process are still under debate. The mechanism through which high-mass stars form, on the other hand, is poorly understood. Although the formation of high-mass stars likely shares many properties with that of low-mass stars, the very first steps of the evolutionary sequence are unclear. Observational studies of star formation are carried out particularly at infrared, submillimetre, millimetre, and radio wavelengths. Much of our knowledge about the early stages of star formation in our Milky Way galaxy is obtained through molecular spectral line and dust continuum observations. The continuum emission of cold dust is one of the best tracers of the column density of molecular hydrogen, the main constituent of molecular clouds. Consequently, dust continuum observations provide a powerful tool to map large portions of molecular clouds and to identify the dense star-forming sites within them. Molecular line observations, on the other hand, provide information on the gas kinematics and temperature. Together, these two observational tools provide an efficient way to study the dense interstellar gas and the associated dust that form new stars. The properties of highly obscured young stars can be further examined through radio continuum observations at centimetre wavelengths.
For example, radio continuum emission carries useful information on conditions in the protostar+disk interaction region where protostellar jets are launched. In this PhD thesis, we study the physical and chemical properties of dense clumps and cores in both low- and high-mass star-forming regions. The sources are mainly studied in a statistical sense, but also in more detail. In this way, we are able to examine the general characteristics of the early stages of star formation, cloud properties on large scales (such as fragmentation), and some of the initial conditions of the collapse process that leads to the formation of a star. The studies presented in this thesis are mainly based on molecular line and dust continuum observations. These are combined with archival observations at infrared wavelengths in order to study the protostellar content of the cloud cores. In addition, centimetre radio continuum emission from young stellar objects (YSOs; i.e., protostars and pre-main sequence stars) is studied in this thesis to determine their evolutionary stages. 
The main results of this thesis are as follows: i) filamentary and sheet-like molecular cloud structures, such as infrared dark clouds (IRDCs), are likely to be caused by supersonic turbulence, but their fragmentation at the scale of cores could be due to gravo-thermal instability; ii) the core evolution in the Orion B9 star-forming region appears to be dynamic, and the role played by slow ambipolar diffusion in the formation and collapse of the cores may not be significant; iii) the study of the R CrA star-forming region suggests that the centimetre radio emission properties of a YSO are likely to change with its evolutionary stage; iv) the IRDC G304.74+01.32 contains candidate high-mass starless cores which may represent the very first steps of high-mass star and star cluster formation; v) SiO outflow signatures are seen in several high-mass star-forming regions, which suggests that high-mass stars form in a similar way to their low-mass counterparts, i.e., via disk accretion. The results presented in this thesis provide constraints on the initial conditions and early stages of both low- and high-mass star formation. In particular, this thesis presents several observational results on the early stages of clustered star formation, which is the dominant mode of star formation in our Galaxy.

Relevance: 20.00%

Abstract:

This thesis proposes that national or ethnic identity is an important and overlooked resource in conflict resolution. Usually, ethnic identity is seen both in international relations and in social psychology as something that fuels the conflict. Using grounded theory to analyze data from interactive problem-solving workshops between Palestinians and Israelis, a theory is developed about the role of national identity in turning conflict into protracted conflict. Drawing upon research from, among others, social identity theory, just world theory and prejudice research, it is argued that national identity is a prime candidate to provide the justification of a conflict party's goals and the dehumanization of the other necessary to make a conflict protracted. It is not the nature of national identity itself that lets it perform this role but rather its ability to mobilize a constituency for social action (see Stürmer, Simon, Loewy, & Jörger, 2003). Reicher & Hopkins (1996) have demonstrated that national identity is constructed by political entrepreneurs to further their cause, even if this construction is not a conscious one. Data from interactive problem-solving workshops suggest that the possibility of conflict resolution is actually seen by participants as a direct threat of annihilation. Given the investment necessary to make a conflict protracted, this reaction seems plausible. The justification for one's actions provided by national identity makes the conflict an integral part of a conflict party's identity. Conflict resolution, it is argued, is therefore a threat to the very core of the current national identity. This may explain why so many peace agreements have failed to provide the hoped-for resolution of conflict.
But if national identity is used in a constructionist way to attain political goals, then a political project of conflict resolution, if it is conscious of the constructionist process, needs to develop a national identity that is independent of conflict and therefore able to accommodate conflict resolution. From this understanding it becomes clear why national identity needs to change, i.e. be disarmed, if conflict resolution is to be successful. This process of disarmament is theorized to be similar to the process of creating and sustaining protracted conflict. What shape and function this change should have is explored from the understanding of the role of national identity in supporting conflict. Finally, ideas are developed for how track-two diplomacy efforts, such as the interactive problem-solving workshop, could integrate a process by which both conflict parties disarm their respective identities.

Relevance: 20.00%

Abstract:

Hantaviruses, members of the genus Hantavirus in the family Bunyaviridae, are enveloped single-stranded RNA viruses with a tri-segmented genome of negative polarity. In humans, hantaviruses cause two diseases, hemorrhagic fever with renal syndrome (HFRS) and hantavirus pulmonary syndrome (HPS), which vary in severity depending on the causative agent. Each hantavirus is carried by a specific rodent host and is transmitted to humans through excreta of infected rodents. The genome of hantaviruses encodes four structural proteins: the nucleocapsid protein (N), the glycoproteins (Gn and Gc), and the polymerase (L), as well as the nonstructural protein (NSs). This thesis deals with the functional characterization of the hantavirus N protein with regard to its structure. Structural studies of the N protein have progressed slowly and the crystal structure of the whole protein is still not available; therefore, biochemical assays coupled with bioinformatic modeling proved essential for studying N protein structure and functions. Presumably, during RNA encapsidation, the N protein first forms intermediate trimers and then oligomers. First, we investigated the role of the N-terminal domain in N protein oligomerization. The results suggested that the N-terminal region of the N protein forms a coiled-coil, in which two antiparallel alpha helices interact via their hydrophobic seams. Hydrophobic residues L4, I11, L18, L25 and V32 in the first helix and L44, V51, L58 and L65 in the second helix were crucial for stabilizing the structure. The results were consistent with the head-to-head, tail-to-tail model for hantavirus N protein trimerization. We demonstrated that an intact coiled-coil structure of the N terminus is crucial for the oligomerization capacity of the N protein. We also added new details to the head-to-head, tail-to-tail model of trimerization by suggesting that the initial step is based on interaction(s) between intact intra-molecular coiled-coils of the monomers.
We further analyzed the importance of charged amino acid (aa) residues located within the coiled-coil for N protein oligomerization. To predict the interacting surfaces of the monomers we used an upgraded in silico model of the coiled-coil domain that was docked into a trimer. Next, the predicted target residues were mutated. The results obtained using the mammalian two-hybrid assay suggested that conserved charged aa residues within the coiled-coil make a substantial contribution to N protein oligomerization. This contribution probably involves the formation of interacting surfaces of the N monomers and also stabilization of the coiled-coil via intramolecular ionic bridging. We proposed that the tips of the coiled-coils are the first to come into direct contact and thus initiate tight packing of the three monomers into a compact structure. This was in agreement with previous results showing that an increase in ionic strength abolished the interaction between N protein molecules. We also showed that the residues having the strongest effect on N protein oligomerization are not scattered randomly throughout the coiled-coil 3D model structure, but form clusters. Next, we found evidence for the hantaviral N protein interaction with the cytoplasmic tail of the glycoprotein Gn. In order to study this interaction we used the GST pull-down assay in combination with mutagenesis. The results demonstrated that intact, properly folded zinc fingers of the Gn protein cytoplasmic tail, as well as the middle domain of the N protein (which includes aa residues 80–248 and supposedly carries the RNA-binding domain), are essential for the interaction. Since hantaviruses do not have a matrix protein, which in other negative-strand RNA viruses (NSRV) mediates the packaging of the viral RNA, hantaviral RNPs should be involved in a direct interaction with the intraviral domains of the envelope-embedded glycoproteins.
By showing the N-Gn interaction we provided evidence for one of the crucial steps in virus replication, at which RNPs are directed to the site of virus assembly. Finally, we started analysis of the N protein RNA-binding region, which is supposedly located in the middle domain of the N protein molecule. We developed a model for the initial step of RNA binding by the hantaviral N protein. We hypothesized that the hantaviral N protein possesses two secondary structure elements that initiate RNA encapsidation. The results suggest that amino acid residues 172-176 presumably act as a hook to catch vRNA and that the positively charged interaction surface (aa residues 144-160) enhances the initial N-RNA interaction. In conclusion, we elucidated new functions of the hantavirus N protein. Using in silico modeling we predicted the domain structure of the protein, and using experimental techniques we showed that each domain is responsible for executing certain function(s). We showed that the intact N-terminal coiled-coil domain is crucial for oligomerization and that charged residues located on its surface form an interaction surface for the N monomers. The middle domain is essential for interaction with the cytoplasmic tail of the Gn protein and for RNA binding.

Relevance: 20.00%

Abstract:

Agriculture is an economic activity that relies heavily on the availability of natural resources. Through its role in food production, agriculture is a major factor affecting public welfare and health, and its indirect contribution to gross domestic product and employment is significant. Agriculture also contributes to numerous ecosystem services through management of rural areas. However, the environmental impact of agriculture is considerable and reaches far beyond the agroecosystems. The questions related to farming for food production are thus manifold and of great public concern. Improving the environmental performance of agriculture and the sustainability of food production ("sustainabilizing" food production) calls for the application of a wide range of expert knowledge. This study falls within the field of agro-ecology, with interfaces to food systems and sustainability research, and exploits the methods typical of industrial ecology. The research in these fields extends from multidisciplinary to interdisciplinary and transdisciplinary, a holistic approach being the key tenet. The methods of industrial ecology have been applied extensively to explore the interaction between human economic activity and resource use. Specifically, the material flow approach (MFA) has established its position through application of systematic environmental and economic accounting statistics. However, very few studies have applied MFA specifically to agriculture. The MFA approach was used in this thesis in such a context in Finland. The focus of this study is the ecological sustainability of primary production. The aim was to explore the possibilities of assessing the ecological sustainability of agriculture by using two different approaches. In the first approach, MFA methods from industrial ecology were applied to agriculture, whereas the second approach is based on food consumption scenarios.
The two approaches were used in order to capture some of the impacts of dietary changes and of changes in production mode on the environment. The methods were applied at levels ranging from national to sector and local levels. Through the supply-demand approach, the viewpoint changed from that of food production to that of food consumption. The main data sources were official statistics complemented with published research results and expert appraisals. The MFA approach was used to define the system boundaries, to quantify the material flows and to construct eco-efficiency indicators for agriculture. The results were further elaborated for an input-output model that was used to analyse the food flux in Finland and to determine its relationship to the economy-wide physical and monetary flows. The methods based on food consumption scenarios were applied at regional and local level for assessing the feasibility and environmental impacts of re-localising food production. The approach was also used for quantification and source allocation of greenhouse gas (GHG) emissions of primary production. The GHG assessment thus provided a means of cross-checking the results obtained by using the two different approaches. MFA data as such, or expressed as eco-efficiency indicators, are useful in describing the overall development. However, the data are not sufficiently detailed for identifying the hot spots of environmental sustainability. Eco-efficiency indicators should not be used uncritically in environmental assessment: the carrying capacity of nature, the potential exhaustion of non-renewable natural resources and the possible rebound effect also need to be accounted for when striving towards improved eco-efficiency. The input-output model is suitable for nationwide economy analyses, and it shows the distribution of monetary and material flows among the various sectors.
Environmental impact can be captured only at a very general level in terms of total material requirement, gaseous emissions, energy consumption and agricultural land use. Improving the environmental performance of food production requires more detailed and more local information. The approach based on food consumption scenarios can be applied at regional or local scales. Based on various diet options, the method accounts for the feasibility of re-localising food production and the environmental impacts of such re-localisation in terms of nutrient balances, gaseous emissions, agricultural energy consumption, agricultural land use and diversity of crop cultivation. The approach is applicable anywhere, but the calculation parameters need to be adjusted to comply with the specific circumstances. The food consumption scenario approach thus pays attention to the variability of production circumstances, and may provide environmental information that is locally relevant. The approaches based on the input-output model and on food consumption scenarios represent small steps towards more holistic systemic thinking. However, neither one alone nor the two together provide sufficient information for sustainabilizing food production. The environmental performance of food production should be assessed together with the other criteria of sustainable food provisioning. This requires evaluation and integration of research results from many different disciplines in the context of a specified geographic area. A foodshed area that comprises both the rural hinterlands of food production and the population centres of food consumption is suggested to represent a suitable areal extent for such research. Finding a balance between the various aspects of sustainability is a matter of optimal trade-off. The balance cannot be universally determined; the assessment methods and the actual measures depend on what the bottlenecks of sustainability are in the area concerned.
These have to be agreed upon among the actors of the area.
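The core of the input-output calculation mentioned above can be sketched in a few lines. This is a generic Leontief-model illustration of the technique; the three sectors, all coefficients and the material intensities below are invented for the example and are not the thesis's Finnish data.

```python
import numpy as np

# A[i, j] = input required from sector i per unit of output of sector j
# (hypothetical technical coefficients for a toy three-sector economy).
A = np.array([
    [0.10, 0.20, 0.01],   # agriculture
    [0.05, 0.10, 0.02],   # food industry
    [0.20, 0.30, 0.20],   # rest of the economy
])
final_demand = np.array([10.0, 50.0, 200.0])   # e.g. billion EUR

# Total output x must cover intermediate use plus final demand:
# x = A x + d, hence x = (I - A)^(-1) d.
x = np.linalg.solve(np.eye(3) - A, final_demand)

# A physical extension: multiplying sectoral output by (invented)
# material-intensity coefficients links the monetary flows to
# economy-wide material flows, as in MFA-style accounting.
intensity = np.array([5.0, 1.2, 0.8])          # tonnes per unit of output
material_use = intensity * x

print(np.round(x, 1))
```

The same structure scales to the dozens of sectors of a national accounting matrix; only the dimensions of `A`, `final_demand` and `intensity` change.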

Relevance: 20.00%

Abstract:

Customer value has been identified as "the reason" for customers to patronize a firm, and as one of the fundamental building blocks that market exchanges build upon. Despite the importance of customer value, it is often poorly defined, or seems to refer to different phenomena. This dissertation contributes to current marketing literature by subjecting the value concept to a critical investigation, and by clarifying its conceptual foundation. Based on the literature review, it is proposed that customer value can be divided into two separate but interrelated aspects: value creation processes, and value outcome determination. This means that on one hand, it is possible to examine those activities through which value is created, and on the other hand, to investigate how customers determine the value outcomes they receive. The results further show that customers may determine value in four different ways: value as a benefit/sacrifice ratio, as experience outcomes, as means-end chains, and value as phenomenological. In value as a benefit/sacrifice ratio, customers are expected to calculate the ratio between service benefits (e.g. ease of use) and sacrifices (e.g. price). In value as experience outcomes, customers are suggested to experience multiple value components, such as functional, emotional, or social value. Customer value as means-end chains in turn models value in terms of the relationships between service characteristics, use value, and desirable ends (e.g. social acceptance). Finally, value as phenomenological proposes that value emerges from lived, holistic experiences. The empirical papers investigate customer value in e-services, including online health care and mobile services, and show how value in e-services stems from the process and content quality, use context, and the service combination that a customer uses. In conclusion, marketers should understand that different value definitions generate different types of understanding of customer value.
In addition, it is clear that studying value from several perspectives is useful, as it enables a richer understanding of value for the different actors. Finally, the interconnectedness between value creation and determination is surprisingly little researched, and this dissertation proposes initial steps towards understanding the relationship between the two.

Relevance: 20.00%

Abstract:

The objective of this paper is to improve option risk monitoring by examining the information content of implied volatility and by introducing the calculation of a single-sum expected risk exposure similar to the Value-at-Risk. The figure is calculated in two steps. First, there is a need to estimate the value of a portfolio of options for a number of different market scenarios, while the second step is to summarize the information content of the estimated scenarios into a single-sum risk measure. This involves the use of probability theory and return distributions, which confronts the user with the problems of non-normality in the return distribution of the underlying asset. Here the hyperbolic distribution is used to describe one alternative for dealing with heavy tails. Results indicate that the information content of implied volatility is useful when predicting future large returns in the underlying asset. Further, the hyperbolic distribution provides a good fit to historical returns enabling a more accurate definition of statistical intervals and extreme events.
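The two-step calculation described above can be sketched as follows. This is an illustrative toy, not the paper's actual model: a Black-Scholes call stands in for the option portfolio, a Student-t distribution stands in for the paper's hyperbolic fit as a simple heavy-tailed return model, and all parameter values (spot, strike, volatility, degrees of freedom) are invented for the example.

```python
import numpy as np
from math import erf, log, sqrt, exp

def ncdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes value of a European call (scalar inputs).
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * ncdf(d1) - K * exp(-r * T) * ncdf(d2)

def scenario_risk(S0=100.0, K=100.0, T=0.25, r=0.02, sigma=0.3,
                  horizon=1 / 252, n=20_000, level=0.99, df=4, seed=1):
    rng = np.random.default_rng(seed)
    # Step 1: simulate heavy-tailed one-day returns of the underlying
    # and revalue the option in each market scenario. The t-draws are
    # rescaled so their standard deviation matches sigma over the horizon.
    scale = sigma * sqrt(horizon) * sqrt((df - 2) / df)
    returns = rng.standard_t(df, size=n) * scale
    v0 = bs_call(S0, K, T, r, sigma)
    scenario_values = np.array(
        [bs_call(S0 * exp(x), K, T - horizon, r, sigma) for x in returns]
    )
    pnl = scenario_values - v0
    # Step 2: summarize the scenario P&L in a single-sum figure: the loss
    # exceeded in only (1 - level) of the scenarios, as in Value-at-Risk.
    return -np.quantile(pnl, 1.0 - level)

print(round(scenario_risk(), 2))   # one-day 99% risk figure, currency units
```

Swapping the Student-t draws for draws from a fitted hyperbolic distribution changes only step 1; the quantile summary in step 2 is unaffected.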

Relevance: 20.00%

Abstract:

This paper extends current discussions about value creation and proposes a customer-dominant value perspective. A customer-dominant marketing logic positions the customer in the center, rather than the service provider/producer, the interaction, or the system. The focus is shifted from the company's service processes involving the customer to the customer's multi-contextual value formation involving the company. It is argued that value is not always an active process of creation; instead, value is embedded and formed in the highly dynamic and multi-contextual reality and life of the customer. This leads to a need to look beyond the current line of visibility, where visible customer-company interactions are in focus, to the invisible and mental life of the customer. From this follows a need to extend the temporal scope, from exchange and use even further to accumulated experiences in the customer's life. The aim of this paper is to explore value formation from a customer-dominant logic perspective. This is done in three steps: first, value formation is contrasted to earlier views on the company's role in value creation by using a broad, ontologically driven framework discussing what, how, when, where and who. Next, implications of the proposed characteristics of value formation compared to earlier approaches are put forward. Finally, some tentative suggestions of how this perspective would affect marketing in service companies are presented. As value formation in a CDL perspective has a different focus and scope than earlier views on value, it leads to posing questions about the customer that reveal earlier hidden aspects of the role of a service for the customer. This insight might be used in service development and innovation.

Relevance: 20.00%

Abstract:

Cosmopolitan ideals have been on the philosophical agenda for several millennia, but the end of the Cold War started a new discussion on state sovereignty, global democracy, the role of international law and global institutions. The Westphalian state system, in practice since the 17th century, is transforming, and the democracy deficit needs new solutions. An impetus has been the fact that in the present world, an international body representing global citizens does not exist. In this Master's thesis, the possibility of establishing a world parliament is examined. In a case analysis, 17 models of a world parliament from two journals, a volume of essays and two other publications are discussed. Based on general observations, the models are divided into four thematic groups. The models are analyzed with an emphasis on feasible and probable elements. Further, a new scenario with a time frame of thirty years is proposed based on the methodology of normative futures studies, taking special interest in causal relationships and actions leading to change. The scenario presents three gradual steps that each need to be realized before a sustainable world parliament is established. The theoretical framework is based on social constructivism, and changes in international and multi-level governance are examined with the concepts of globalization, democracy and sovereignty. A feasible, desirable and credible world parliament is constituted gradually by applying electoral, democratic and legal measures, with members coming initially from exclusively democratic states, parliamentarians, non-governmental organizations and other groups. The parliament should be located outside the United Nations context, since a new body avoids the problem of inefficiency currently prevailing in the UN. The main objectives of the world parliament are to safeguard peace and international law and to offer legal advice in cases where international law has been violated.
A feasible world parliament would be advisory in the beginning but granted legislative powers later. The number of members in the world parliament could also be extended, following the example of the EU enlargement process.

Relevance: 20.00%

Abstract:

The growing interest in sequencing with higher throughput in the last decade has led to the development of new sequencing applications. This thesis concentrates on optimizing DNA library preparation for the Illumina Genome Analyzer II sequencer. The library preparation steps that were optimized include fragmentation, PCR purification and quantification. DNA fragmentation was performed with focused sonication at different concentrations and durations. Two column-based PCR purification methods, a gel matrix method and a magnetic bead-based method were compared. Quantitative PCR and on-chip gel electrophoresis were compared for DNA quantification. The magnetic bead purification was found to be the most efficient and flexible purification method. The fragmentation protocol was changed to produce longer fragments, to be compatible with longer sequencing reads. Quantitative PCR correlates better with the cluster number and should thus be considered the default quantification method for sequencing. As a result of this study, more data have been acquired from sequencing at lower cost, and troubleshooting has become easier as quality control steps have been added to the protocol. New sequencing instruments and applications will create demand for further optimization in the future.

Relevance: 20.00%

Abstract:

The Earth's climate is a highly dynamic and complex system in which atmospheric aerosols have been increasingly recognized to play a key role. Aerosol particles affect the climate through a multitude of processes, directly by absorbing and reflecting radiation and indirectly by changing the properties of clouds. Because of this complexity, quantification of the effects of aerosols continues to be a highly uncertain science. Better understanding of the effects of aerosols requires more information on aerosol chemistry. Before the determination of aerosol chemical composition by the various available analytical techniques, aerosol particles must be reliably sampled and prepared. Indeed, sampling is one of the most challenging steps in aerosol studies, since all available sampling techniques harbor drawbacks. In this study, novel methodologies were developed for sampling and determination of the chemical composition of atmospheric aerosols. In the particle-into-liquid sampler (PILS), aerosol particles grow in saturated water vapor and are then impacted into and dissolved in liquid water. Once in water, the aerosol sample can be transported and analyzed by various off-line or on-line techniques. In this study, the PILS was modified and the sampling procedure was optimized to obtain less altered aerosol samples with good time resolution. A combination of denuders with different coatings was tested to adsorb gas-phase compounds before the PILS. Mixtures of water with alcohols were introduced to increase the solubility of aerosols. The minimum sampling time required was determined by collecting samples off-line every hour and proceeding with liquid-liquid extraction (LLE) and analysis by gas chromatography-mass spectrometry (GC-MS). The laboriousness of LLE followed by GC-MS analysis next prompted an evaluation of solid-phase extraction (SPE) for the extraction of aldehydes and acids in aerosol samples. These two compound groups are thought to be key for aerosol growth.
Octadecylsilica, hydrophilic-lipophilic balance (HLB), and mixed-phase anion exchange (MAX) sorbents were tested as extraction materials. MAX proved to be efficient for acids, but no tested material offered sufficient adsorption of aldehydes. Thus, PILS samples were extracted only with MAX to guarantee good results for organic acids determined by high-performance liquid chromatography-mass spectrometry (HPLC-MS). On-line coupling of SPE with HPLC-MS is relatively easy, and here on-line coupling of PILS with HPLC-MS through the SPE trap produced some interesting data on relevant acids in atmospheric aerosol samples. A completely different approach to aerosol sampling, namely differential mobility analyzer (DMA)-assisted filter sampling, was employed in this study to provide information about the size-dependent chemical composition of aerosols and understanding of the processes driving aerosol growth from nano-sized clusters to climatically relevant particles (>40 nm). The DMA was set to sample particles with diameters of 50, 40, and 30 nm, and aerosols were collected on Teflon or quartz fiber filters. To clarify the gas-phase contribution, zero gas-phase samples were collected by switching off the DMA for every other 15-minute period. Gas-phase compounds were adsorbed equally well on both types of filter, and were found to contribute significantly to the total compound mass. Gas-phase adsorption is especially significant during the collection of nanometer-sized aerosols and always needs to be taken into account. A further aim of this study was to determine the oxidation products of β-caryophyllene (the major sesquiterpene in boreal forests) in aerosol particles. Since reference compounds are needed for verification of the accuracy of analytical measurements, three oxidation products of β-caryophyllene were synthesized: β-caryophyllene aldehyde, β-nocaryophyllene aldehyde, and β-caryophyllinic acid.
All three were identified for the first time in ambient aerosol samples, at relatively high concentrations, and their contribution to the aerosol mass (and probably growth) was concluded to be significant. The methodological and instrumental developments presented in this work enable a fuller understanding of the processes behind biogenic aerosol formation and provide new tools for more precise determination of biosphere-atmosphere interactions.

Relevance: 20.00%

Abstract:

Aerosol particles affect climate, visibility, air quality and human health. However, the extent to which aerosol particles affect our everyday life is not well described or entirely understood. Therefore, investigations are required of different processes and phenomena, including primary particle sources, the initial steps of secondary particle formation and growth, the significance of charged particles in particle formation, and redistribution mechanisms in the atmosphere. In this work, the sources, sinks and concentrations of air ions (charged molecules, clusters and particles) were investigated directly, by measuring air-ionising components (i.e. radon activity concentrations and external radiation dose rates) and charged particle size distributions, as well as through a literature review. The obtained results gave a comprehensive and valuable picture of the spatial and temporal variation of air ion sources, sinks and concentrations for use as input parameters in local- and global-scale climate models. Newly developed air ion spectrometers (Airel Ltd.) offered a possibility to investigate atmospheric (charged) particle formation and growth at sub-3 nm sizes. Therefore, new visual classification schemes for charged particle formation events were developed, and a newly developed particle growth rate method was tested on a dataset of over one year. These data analysis methods have been widely utilised by other researchers since their introduction. This thesis revealed interesting characteristics of atmospheric particle formation and growth: for example, particle growth may sometimes be suppressed below the detection limit (~3 nm) of traditional aerosol instruments; particle formation may take place during daytime as well as in the evening; and growth rates of sub-3 nm particles were quite constant throughout the year, while growth rates of larger particles (3-20 nm in diameter) were higher during summer than during winter.
These observations were thought to be a consequence of availability of condensing vapours. The observations of this thesis offered new understanding of the particle formation in the atmosphere. However, the role of ions in particle formation, which is not well understood with current knowledge, requires further research in future.
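The particle growth-rate analysis referred to above can be illustrated with a minimal sketch: a least-squares slope of the mode diameter of a growing particle population against time gives the growth rate in nm/h. The data and function name below are hypothetical; the thesis's actual method is more elaborate.

```python
def growth_rate_nm_per_h(times_h, mode_diameters_nm):
    """Least-squares slope of mode diameter vs. time, i.e. growth rate in nm/h."""
    n = len(times_h)
    t_mean = sum(times_h) / n
    d_mean = sum(mode_diameters_nm) / n
    sxy = sum((t - t_mean) * (d - d_mean)
              for t, d in zip(times_h, mode_diameters_nm))
    sxx = sum((t - t_mean) ** 2 for t in times_h)
    return sxy / sxx

# Hypothetical mode diameters of a nucleation-mode population
times = [0.0, 0.5, 1.0, 1.5, 2.0]     # hours since the event started
diams = [1.8, 2.7, 3.9, 4.8, 6.0]     # mode diameter, nm

gr = growth_rate_nm_per_h(times, diams)
print(f"growth rate ~ {gr:.1f} nm/h")  # ~ 2.1 nm/h for these data
```

In practice the mode diameter at each time step would first be extracted from the measured size distributions, e.g. by fitting a log-normal mode.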

Relevância:

20.00%

Publicador:

Resumo:

The study of soil microbiota and their activities is central to understanding many ecosystem processes such as decomposition and nutrient cycling. Collecting microbiological data from soils generally involves several sequential steps of sampling, pretreatment and laboratory measurement, and the reliability of the results depends on reliable methods at every step. The aim of this thesis was to critically evaluate some central methods and procedures used in soil microbiological studies, in order to increase our understanding of the factors that affect measurement results and to provide guidance and new approaches for the design of experiments. The thesis focuses on four major themes: 1) soil microbiological heterogeneity and sampling, 2) storage of soil samples, 3) DNA extraction from soil, and 4) quantification of specific microbial groups by the most-probable-number (MPN) procedure. Soil heterogeneity and sampling are discussed as a single theme because knowledge of spatial (horizontal and vertical) and temporal variation is crucial when designing sampling procedures. Comparison of adjacent forest, meadow and cropped field plots showed that land use has a strong impact on the degree of horizontal variation in soil enzyme activities and bacterial community structure. However, regardless of land use, the variation of microbiological characteristics appeared to have no predictable spatial structure at 0.5-10 m. Temporal and depth-related patterns were studied in relation to plant growth in cropped soil. The results showed that most enzyme activities and microbial biomass have a clear decreasing trend in the top 40 cm of the soil profile and a temporal pattern during the growing season. A new procedure for sampling soil microbiological characteristics, based on stratified sampling and pre-characterisation of samples, was developed.
A practical example demonstrated the potential of the new procedure to reduce the analysis effort involved in laborious microbiological measurements without loss of precision. The investigation of sample storage revealed that freezing (-20 °C) small sample aliquots retains the activity of hydrolytic enzymes and the structure of the bacterial community relatively well in different soil matrices, whereas air-drying cannot be recommended as a storage method for soil microbiological properties due to large reductions in activity. Freezing below -70 °C was the preferred storage method for samples with a high organic matter content. Comparison of different direct DNA extraction methods showed that the cell lysis treatment has a strong impact on the molecular size of the DNA obtained and on the bacterial community structure detected. An improved MPN method for the enumeration of soil naphthalene degraders was introduced as an alternative to more complex MPN protocols or DNA-based quantification. The main advantages of the new method are its simple protocol and the possibility of analysing a large number of samples and replicates simultaneously.
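The MPN procedure mentioned above infers microbial density from the pattern of positive and negative tubes in a dilution series. A minimal maximum-likelihood version can be sketched as follows; this is a generic illustration of the statistical idea, not the thesis's actual protocol, and the function name and example counts are hypothetical.

```python
import math

def mpn_per_unit(volumes, n_tubes, n_positive, lo=1e-6, hi=1e7):
    """Maximum-likelihood MPN estimate for a dilution series.

    volumes[i]   : amount of sample per tube at dilution i
    n_tubes[i]   : number of tubes inoculated at dilution i
    n_positive[i]: number of tubes showing growth at dilution i
    """
    def score(lam):
        # Derivative of the log-likelihood of the standard Poisson model;
        # it is positive below the MLE and negative above it.
        s = 0.0
        for v, n, p in zip(volumes, n_tubes, n_positive):
            if p > 0:
                s += p * v * math.exp(-lam * v) / (1.0 - math.exp(-lam * v))
            s -= (n - p) * v
        return s

    for _ in range(200):          # bisection on a log scale
        mid = math.sqrt(lo * hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# Example: 3 tubes per dilution with 0.1, 0.01 and 0.001 g of soil,
# scored 3-1-0 positive -> roughly 43 organisms per gram
mpn = mpn_per_unit([0.1, 0.01, 0.001], [3, 3, 3], [3, 1, 0])
print(f"MPN ~ {mpn:.0f} per g")
```

Published MPN tables tabulate essentially this estimate (plus confidence limits) for common tube configurations.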

Relevância:

20.00%

Publicador:

Resumo:

Atomic layer deposition (ALD) is a method for depositing thin films from gaseous precursors onto a substrate layer by layer, so that the film thickness can be tailored with atomic-layer accuracy. Tailoring is taken even further by selective-area ALD, which allows the film growth to be controlled also laterally across the substrate surface. Selective-area ALD reduces the number of process steps in preparing thin-film devices, which can be of great technological importance as ALD films come into wider use in different applications. Selective-area ALD can be achieved by passivation or activation of the surface. In this work ALD growth was prevented by octadecyltrimethoxysilane, octadecyltrichlorosilane and 1-dodecanethiol self-assembled monolayers (SAMs), and by PMMA (poly(methyl methacrylate)) and PVP (poly(vinyl pyrrolidone)) polymer films. SAMs were prepared from the vapor phase and by microcontact printing, and polymer films were spin coated. Microcontact printing produced patterned SAMs directly. The SAMs prepared from the vapor phase and the polymer mask layers were patterned by UV lithography or a lift-off process: after preparation of a continuous mask layer, selected areas of it were removed, and the ALD film was deposited selectively on these areas. SAMs and polymer films prevented growth in several ALD processes, such as iridium, ruthenium, platinum, TiO2 and polyimide, so that the ALD films grew only on areas without a SAM or polymer mask layer. PMMA and PVP films also protected the surface against Al2O3 and ZrO2 growth. Activation of the surface for ALD of ruthenium was achieved by preparing a RuOx layer by microcontact printing: at low temperatures the RuCp2-O2 process nucleated only on this oxidative activation layer, not on bare silicon.

Relevância:

20.00%

Publicador:

Resumo:

Hantaviruses (family Bunyaviridae, genus Hantavirus) are enveloped viruses with a segmented, negative-sense RNA genome. Each hantavirus is carried by a specific host, either a rodent or an insectivore (shrew), in which the infection is asymptomatic and persistent. In humans, hantaviruses cause hemorrhagic fever with renal syndrome (HFRS) in Eurasia and hantavirus cardiopulmonary syndrome (HCPS) in the Americas. In Finland, Puumala virus is the causative agent of nephropathia epidemica (NE), a mild form of HFRS. HFRS-type diseases are often associated with renal failure and proteinuria, which might be mechanistically explained by degeneration of infected kidney tubular cells in patients. It has previously been shown that a non-pathogenic hantavirus, Tula virus (TULV), can cause programmed cell death, apoptosis, in cell cultures, suggesting that the kidney tubular degeneration could be caused directly by virus replication. In the first paper of this thesis, the molecular mechanisms involved in TULV-induced apoptosis were further elucidated. A virus replication-dependent down-regulation of ERK1/2, concomitant with the induced apoptosis, was identified. This phenomenon was not restricted to TULV or to non-pathogenic hantaviruses in general, since a pathogenic hantavirus, Seoul virus, could also inhibit ERK1/2 activity. Hantavirus particles consist of the membrane-spanning glycoproteins Gn and Gc, the RNA-dependent RNA polymerase (L protein), and the nucleocapsid protein N, which encapsidates the viral genome and thus forms the ribonucleoprotein (RNP). Interaction between the cytoplasmic tails of the viral glycoproteins and the RNP is assumed to be the only means by which viral genetic material is incorporated into infectious virions. In the second paper of this thesis, it was shown by immunoprecipitation that the viral glycoproteins and the RNP interact in purified virions.
It was further shown that peptides derived from the cytoplasmic tails (CTs) of both Gn and Gc could bind the RNP and recombinant N protein. In the fourth paper, the cytoplasmic tail of Gn, but not that of Gc, was shown to interact with genomic RNA. This interaction was probably rather unspecific, since binding of the Gn-CT to unrelated RNA and even to single-stranded DNA was also observed. However, since the RNP consists of both the N protein and the N protein-encapsidated genomic RNA, it is possible that the viral genome plays a role in the packaging of RNPs into virions. On the other hand, the nucleic acid-binding activity of Gn may be important in the synthesis of viral RNA. The binding sites of the Gn-CT for the N protein and for nucleic acids were also determined by peptide arrays, and they were found to largely overlap. The Gn-CTs of hantaviruses contain a conserved zinc finger (ZF) domain of unknown function. Some viruses need ZFs in entry or post-entry steps of the viral life cycle. Cysteine residues are required for the folding of ZFs, as they coordinate zinc ions, and alkylation of these residues can affect virus infectivity. In the third paper, it was shown that purified hantavirions could be inactivated by treatment with cysteine-alkylating reagents, especially N-ethylmaleimide. However, the effect could not be pinpointed to the ZF of the Gn-CT, since other viral proteins also reacted with maleimides, and it was therefore impossible to exclude the possibility that cysteines other than those essential for ZF formation are required for hantavirus infectivity.

Relevância:

20.00%

Publicador:

Resumo:

The aim of the literature review was to examine the background of the greenhouse effect and to survey previous studies on the greenhouse gas emissions of beef and other meat products. In addition, the literature review covered the life cycle assessment methodology, compliant with the ISO 14040 standard, applied in earlier studies to calculate the carbon footprint of foods. The aim of the experimental part was to determine the carbon footprint of beef in Finland from the farm gate to the consumer's table. A further aim was to understand the significance of the emissions of the processing chain relative to the entire beef production chain, and to determine the significance of the individual stages within the processing chain. The functional unit of the study was one kilogram of beef. The work was carried out by examining in detail one beef processing chain in Finland. The emissions were calculated from actual process data obtained from the partner company. The data were collected with a data collection form during visits to two of the partner company's production plants and were supplemented through interviews. The emissions of the beef processing chain were 1240 g CO2-eq per kilogram of meat. The largest emission sources were the processing stage (310 g CO2-eq/kg of meat), slaughter (280 g CO2-eq/kg of meat) and transport of the meat products to the consumer (210 g CO2-eq/kg of meat). The processing chain accounted for less than 4% of the emissions of the entire beef production chain, as the emissions from birth to the farm gate were estimated, based on the literature, at over 30 000 g CO2-eq per kilogram of meat. In the future, the carbon footprint of beef could mainly be reduced by improving the process up to the farm gate. The results of this work were very similar in magnitude to an earlier study on the emissions of the broiler processing chain in Finland (Katajajuuri et al. 2008). This was in line with expectations, since there were no significant differences between the stages of the processing chains. No previous studies on the emissions of the beef processing chain were available.
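The share quoted in the abstract follows directly from the reported figures; a quick sanity check, using the abstract's numbers with the farm-gate emissions taken at their literature lower bound of 30 000 g CO2-eq/kg, confirms it:

```python
# Stage emissions of the beef processing chain (g CO2-eq per kg of meat),
# as reported in the abstract; the remaining chain stages make up the rest.
stages = {"processing": 310, "slaughter": 280, "transport_to_consumer": 210}
chain_total = 1240          # entire processing chain
farm_to_gate = 30_000       # literature-based lower bound, birth to farm gate

share = chain_total / (chain_total + farm_to_gate)
print(f"largest stages cover {sum(stages.values())} g of {chain_total} g")
print(f"processing-chain share of total: {share:.2%}")  # 3.97%, i.e. below 4%
```

Since the farm-to-gate figure is a lower bound, the actual share of the processing chain is at most this value.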