25 results for Photothermal transparent transducer
in Helda - Digital Repository of the University of Helsinki
Abstract:
Language software applications encounter new words, e.g., acronyms, technical terminology, names or compounds of such words. In order to add new words to a lexicon, we need to indicate their inflectional paradigm. We present a new generally applicable method for creating an entry generator, i.e. a paradigm guesser, for finite-state transducer lexicons. As a guesser tends to produce numerous suggestions, it is important that the correct suggestions be among the first few candidates. We prove some formal properties of the method and evaluate it on Finnish, English and Swedish full-scale transducer lexicons. We use the open-source Helsinki Finite-State Technology to create finite-state transducer lexicons from existing lexical resources and automatically derive guessers for unknown words. The method has a recall of 82–87% and a precision of 71–76% for the three test languages. The model needs no external corpus and can therefore serve as a baseline.
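The core idea of such a paradigm guesser, ranking candidate inflectional paradigms for an unseen word by its longest known word-final suffixes, can be sketched as follows. This is a minimal illustration only: the lexicon entries, paradigm labels and helper functions are hypothetical, not the HFST implementation.

```python
from collections import defaultdict

# Hypothetical (word, paradigm) training pairs from an existing lexicon.
LEXICON = [
    ("talo", "N-talo"), ("kissa", "N-kala"), ("kala", "N-kala"),
    ("pallo", "N-talo"), ("musiikki", "N-risti"), ("pankki", "N-risti"),
]

def build_guesser(lexicon, max_suffix=4):
    """Index paradigms by word-final suffixes of increasing length."""
    index = defaultdict(lambda: defaultdict(int))
    for word, paradigm in lexicon:
        for k in range(1, min(max_suffix, len(word)) + 1):
            index[word[-k:]][paradigm] += 1
    return index

def guess(index, word, max_suffix=4):
    """Return candidate paradigms for an unknown word: longest matching
    suffix wins; within a suffix, more frequent paradigms rank first."""
    for k in range(min(max_suffix, len(word)), 0, -1):
        suffix = word[-k:]
        if suffix in index:
            return sorted(index[suffix], key=index[suffix].get, reverse=True)
    return []

idx = build_guesser(LEXICON)
print(guess(idx, "hissi"))  # unseen word ending in -i: ['N-risti']
```

Ranking by suffix length first and paradigm frequency second is one simple way to push the correct suggestion among the first few candidates, which the abstract identifies as the practical requirement.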
Abstract:
The thesis examines Bo Carpelan's novel Alkutuuli (Urwind, 1993) as the fragmentarily narrated life story of its protagonist, Daniel Urwind. The novel is a fictional autobiography that proceeds on two temporal levels and is based on weekly diary entries. Daniel outlines his identity through memories bound to space and through his own name, at the same time cutting the continuity of the narration. The research problem is to identify Daniel's modes of narration, his reasons for writing, and the outcome of the process. The analysis of first-person narration directed at the past draws on the concepts of Dorrit Cohn's Transparent Minds. Alkutuuli features an irregular alternation of memory narration and memory monologue. Embedded in these are the monologues of deceased loved ones, behind which Daniel's own voice shines through: the monologues are speech and thoughts that Daniel imagines or reports, their speaker being the other party in his solitary conversation. The primary theoretical sources on autobiography are Päivi Kosonen's articles and her study Elämät sanoissa. Bo Carpelan's essays and literary works are central to the interpretation offered in the thesis. The starting point of the writing is a phase of life that shakes the protagonist's identity. After his wife Maria leaves for a year in America, Daniel tries to find a language of his own and, with the help of his memories, to create a more coherent picture of himself. Space is central to this process: as Daniel walks through his house, which is also his childhood home, sensory reality acts as a trigger of recollection, and sounds and scents lead him to relive memories of the past in the present moment. At the same time the house and its rooms become personified, and the self begins to take on the shape of the space. The discovery of new doors and rooms symbolizes the clarification of Daniel's memories and the deepening of his self-understanding. The writing can be interpreted as a psychoanalytic process in which the autobiographical self converses with his past self.
The speaker awaits an answer from the other, although the answer, paradoxically, lies within himself. The polyphony of the autobiography is underscored by the presence of deceased loved ones, by the identities of Daniel at different ages, and by the selves appearing in the figures of doppelgängers. Through writing, Daniel seeks to gain distance from himself and to reach a state of selflessness, which is discussed by means of the concept of "negative capability", originating with John Keats. Alkutuuli is an artist novel in which wings and flying, representing artisthood, are linked to imagination and renewal. Daniel's path to the selflessness that precedes artisthood is nevertheless difficult, and many signs point to the fate of a dilettante. Behind the fragmentary narration with its dreamlike logic, certain regularities can also be discerned. The short biographies of the other characters are episodes that expose their wounds and become part of Daniel's life story. Daniel's literary self-portrait is a montage in which elements on different levels, the present and the past, imagination, dreams and embedded monologues, are juxtaposed on a single plane. Daniel mirrors what he has lived and experienced through a comic mirror structure in which the people of his past who shared in his life walk along in a procession of fools. The wish for a coherent self-image, however, proves impossible, since any closure of meaning would be contrary to the truth. The autobiography, which began with reflection on the name Urwind, ends, following a cyclical movement, in a situation where the protagonist accepts the incomprehensibility of life and trusts in the power hidden in his own name, the wind, alluding at the same time to the title of the novel. Keywords: Bo Carpelan - memories - space - fragmentariness - fictional autobiography
Abstract:
This thesis combines a computational analysis of a comprehensive corpus of Finnish lake names with a theoretical background in cognitive linguistics. The combination results, on the one hand, in a description of the toponymic system and the processes involved in analogy-based naming and, on the other hand, in some adjustments to Construction Grammar. Finnish lake names are suitable for this kind of study, as they are to a large extent semantically transparent even when relatively old. There is also a large number of them, and they are comprehensively collected in a computer database. The current work starts with an exploratory computational analysis of co-location patterns between different lake names. Such an analysis makes it possible to assess the importance of analogy and patterns in naming. Prior research has suggested that analogy plays an important role, often also in cases where there are other motivations for the name, and the current study confirms this. However, it also appears that naming patterns are very fuzzy and that their nature is somewhat hard to define in an essentially structuralist tradition. In describing toponymic structure and the processes involved in naming, cognitive linguistics presents itself as a promising theoretical basis. The descriptive formalism of Construction Grammar seems especially well suited for the task. However, productivity now becomes a problem: it is not nearly as clear-cut as the latter theory often assumes, and this is even more apparent in names than in more traditional linguistic material. The varying degree of productivity is most naturally described by a prototype-based theory. Such an approach, however, requires some adjustments to Construction Grammar.
Based on all this, the thesis proposes a descriptive model where a new name -- or more generally, a new linguistic expression -- can be formed by conceptual integration from either a single prior example or a construction generalised from a number of different prior ones. The new model accounts nicely for various aspects of naming that are problematic for the traditional description based on analogy and patterns.
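The exploratory co-location analysis mentioned above can be sketched minimally. The toy example below counts how often pairs of lake names occur in the same area, a stand-in for the actual corpus-scale analysis; the records, area codes and function name are hypothetical.

```python
from collections import Counter
from itertools import combinations

# Hypothetical (lake name, area) records from a toponym database.
LAKES = [
    ("Valkeinen", "A"), ("Mustinen", "A"), ("Valkeinen", "B"),
    ("Mustinen", "B"), ("Valkeinen", "C"), ("Likainen", "C"),
]

def colocation_counts(records):
    """Count how often each pair of distinct names occurs in the same area."""
    by_area = {}
    for name, area in records:
        by_area.setdefault(area, set()).add(name)
    pairs = Counter()
    for names in by_area.values():
        for a, b in combinations(sorted(names), 2):
            pairs[(a, b)] += 1
    return pairs

print(colocation_counts(LAKES).most_common(1))
# ('Mustinen', 'Valkeinen') co-occur in two areas
```

Pairs that co-occur in many areas far more often than chance would predict are candidates for analogy-driven naming patterns of the kind the thesis investigates.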
Abstract:
Le naturalisme finlandais. Une conception entropique du quotidien. Finnish Naturalism. An Entropic Conception of Everyday Life. Nineteenth-century naturalism was a strikingly international literary movement. After emerging in France in the 1870s, it spread all over Europe, including young, small nations with a relatively recent literary tradition, such as Finland. This thesis surveys the role and influence of French naturalism on the Finnish literature of the 1880s and 1890s. On the basis of a selection of works by six Finnish authors (Juhani Aho, Minna Canth, Kauppis-Heikki, Teuvo Pakkala, Ina Lange and Karl August Tavaststjerna), the study establishes a view of the main features of Finnish naturalism in comparison with that of French authors such as Zola, Maupassant and Flaubert. The study's methodological framework is genre theory: even though naturalist writers insisted on a transparent description of reality, naturalist texts are firmly rooted in general generic categories with definable relations and constants on which European novels impose variations. By means of two key concepts, entropy and everyday life, this thesis establishes the parameters of the naturalist genre. At the heart of the naturalist novel is a movement in the direction of disintegration and confusion, from order to disorder, from illusion to disillusion. This entropic vision is merged into the representation of everyday life, focusing on socially mediocre characters and discovering their miseries in all their banality and daily grayness. By using Mikhail Bakhtin's idea of literary genres as a means of understanding experience, this thesis suggests that everyday life is an ideological core of naturalist literature that determines not only its thematic but also its generic distinctions: in relation to other genres, such as Balzac's realism, naturalism appears primarily to be a banalization of everyday life.
In idyllic genres, everyday life can be represented by means of sublimation, but the naturalist novel establishes a distressing, negative everyday life and thus strives to take a critical view of modern society. Besides the central themes, the study surveys the generic blends in naturalism. The thesis analyzes how the coalition of naturalism and the melodramatic mode in the work of Minna Canth serves naturalism's ambition to discover the unconscious instincts underlying daily realities, and how the symbolic mode in the work of Juhani Aho duplicates the semantic level of the apparently insignificant, everyday naturalist details. The study compares the naturalist novel to the ideological novel (roman à thèse) and surveys the central dilemma of naturalism, the confrontation between the optimistic belief in social reform and the pessimistic theory of determinism. The thesis proposes that the naturalist novel's contribution to social reform lies in its shock effect. By representing the unpleasant truth, the entropy of everyday life, it aims to scandalize readers and make them aware of the harsh realities that might also apply to them.
Abstract:
The purpose of this research was to examine teachers' pedagogical thinking as based on beliefs. It aimed to identify beliefs in teachers' speech as they reflected on their own teaching. The placement of beliefs on the levels of pedagogical thinking was also examined. The second starting point of the study was the Instrumental Enrichment intervention, which aims to enhance students' learning potential and cognitive functioning. The goal was to investigate how the five main principles of the intervention appear in teachers' thinking; a more specific research question was how similar teachers' beliefs are to the main principles of the intervention. The teacher-thinking paradigm provided the framework for this study, and its essential concepts are defined in the theoretical framework. The model of pedagogical thinking was important in the examination of teachers' thinking, and beliefs were approached through several different theories. Feuerstein's theories of structural cognitive modifiability and mediated learning experience complemented the theory of teacher thinking. The research material was gathered in two parts. In the first part, two mathematics lessons of three class teachers were videotaped. In the second part, the teachers were interviewed using a stimulated-recall method. The interviews were recorded and analysed by qualitative content analysis. Teachers' beliefs were divided into themes and the contents of these themes were described. This part of the analysis was inductive; the second part was deductive, based on the theories of pedagogical thinking levels and the Instrumental Enrichment intervention. According to the results, three subcategories of teachers' beliefs were found: beliefs about learning, beliefs about teaching and beliefs about students. When the teachers discussed learning, they emphasized the importance of understanding. In teaching-related beliefs, student-centredness was highlighted.
The teachers also brought out some requirements for good education: clarity, diversity and planning. Beliefs about students were divided into two groups: the teachers believed that there are learning differences between students and that students have improved over the years. Because most of the beliefs were close to practice and related to concrete classroom situations, they were situated on the action level of pedagogical thinking. Some teaching- and learning-related beliefs of individual teachers were situated on the object-theory level; no metatheory-level beliefs were found. The occurrence of the main principles of the intervention differed between teachers: they were much more consistent and transparent in the beliefs of one teacher than in those of the other two. Differences also occurred between principles. For example, reciprocity came up in every teacher's beliefs, but modifiability was found in the beliefs of only one teacher. The results of this research were consistent with other research in the field. Teachers' beliefs about teaching were individual: even though shared themes were found, the teachers emphasized different aspects of their work. The occurrence of beliefs in accordance with the intervention was teacher-specific, and inconsistencies were also found within individual teachers' beliefs.
Abstract:
Parkinson’s disease (PD) is the second most common neurodegenerative disease among the elderly. Its etiology is unknown and no disease-modifying drugs are available. Thus, more information concerning its pathogenesis is needed. Among other genes, mutated PTEN-induced kinase 1 (PINK1) has been linked to early-onset and sporadic PD, but its mode of action is poorly understood. Most animal models of PD are based on the use of the neurotoxin 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP). MPTP is metabolized to MPP+ by monoamine oxidase B (MAO B) and causes cell death of dopaminergic neurons in the substantia nigra in mammals. Zebrafish have long been used as a model organism in developmental biology, but are now emerging as a model for human diseases due to their ideal combination of properties. Zebrafish are inexpensive and easy to maintain, develop rapidly, breed in large quantities, produce transparent embryos, and are readily manipulated by various methods, particularly genetic ones. In addition, zebrafish are vertebrates, and results derived from zebrafish may be more applicable to mammals than results from invertebrate genetic models such as Drosophila melanogaster and Caenorhabditis elegans. However, the similarity cannot be taken for granted. The aim of this study was to establish and test a PD model using larval zebrafish. The developing monoaminergic neuronal systems of larval zebrafish were investigated. We identified and classified 17 catecholaminergic and 9 serotonergic neuron populations in the zebrafish brain. A 3-dimensional atlas was created to facilitate future research. Only one gene encoding MAO was found in the zebrafish genome. Zebrafish MAO showed MAO A-type substrate specificity, but non-A-non-B inhibitor specificity. The distribution of MAO in larval and adult zebrafish brains was both diffuse and distinctly cellular.
Inhibition of MAO during larval development led to markedly elevated 5-hydroxytryptamine (serotonin, 5-HT) levels, which decreased the locomotion of the fish. MPTP exposure caused a transient loss of cells in specific aminergic cell populations and decreased locomotion. The MPTP-induced changes could be rescued by the MAO B inhibitor deprenyl, suggesting a role for MAO in MPTP toxicity. MPP+ affected only one catecholaminergic cell population; thus, its action was more selective than that of MPTP. The zebrafish PINK1 gene was cloned, and morpholino oligonucleotides were used to suppress its expression in larval zebrafish. The functional domains and expression pattern of zebrafish PINK1 resembled those of other vertebrates, suggesting that zebrafish is a feasible model for studying PINK1. Translation inhibition resulted in loss of the same catecholaminergic cell populations affected by MPTP and MPP+. Inactivation of PINK1 sensitized larval zebrafish to subefficacious doses of MPTP, causing a decrease in locomotion and cell loss in one dopaminergic cell population. The zebrafish thus appears to be a feasible model for studying PD, since its aminergic systems, the mode of action of MPTP, and the functions of PINK1 resemble those of mammals. However, the functions of zebrafish MAO differ from those of the two forms of MAO found in mammals. Future studies using zebrafish PD models should exploit the advantages specific to zebrafish, such as the ability to execute large-scale genetic or drug screens.
Abstract:
Studies in both vertebrates and invertebrates have identified proteins of the Hedgehog (Hh) family of secreted signaling molecules as key organizers of tissue patterning. Initially discovered in Drosophila in 1992, Hh family members have been found in animals with body plans as diverse as those of mammals, insects and echinoderms. In humans, three related Hh genes have been identified: Sonic, Indian and Desert hedgehog (Shh, Ihh and Dhh). Transduction of the Hh signal to the cytoplasm utilizes an unusual mechanism involving consecutive repressive interactions between Hh and its receptor components, Patched (Ptc) and Smoothened (Smo). Several cytoplasmic proteins involved in Hh signal transduction are known in Drosophila, but mammalian homologs are known only for the Cubitus interruptus (Ci) transcription factor (GLI1-3) and for the Ci/GLI-associated protein Suppressor of Fused (Su(fu)). In this study I analyzed how the Hh receptor Ptc regulates the signal transducer Smo, and how Smo relays the Shh signal from the cell surface to the cytoplasm, ultimately leading to the activation of GLI transcription factors. In Drosophila, the kinesin-like protein Costal2 (Cos2) is required for suppression of Hh target gene expression in the absence of ligand, and loss of Cos2 causes embryonic lethality. Cos2 acts by bridging Smo to Ci. Another protein, Su(fu), exerts a weak suppressive influence on Ci activity, and loss of Su(fu) causes only subtle changes in the Drosophila wing pattern. This study revealed that domains in Smo that are critical for Cos2 binding in Drosophila are dispensable for mammalian Smo function. Furthermore, by analyzing the function of Su(fu) and the closest mouse homologs of Cos2 through protein overexpression and RNA interference, I found that inhibition of the Hh response pathway in the absence of ligand does not require Cos2 activity but instead critically depends on the activity of Su(fu).
These results indicate that a major change in the mechanism of action of a conserved signaling pathway occurred during evolution, probably through phenotypic drift made possible by the existence in some species of two parallel pathways acting between the Hh receptor and the Ci/GLI transcription factors. In a second approach to unravel Hh signaling, we cloned more than 90% of all human full-length protein kinase cDNAs and constructed the corresponding kinase-activity-deficient mutants. Using this kinome resource as a screening tool, two kinases, MAP3K10 and DYRK2, were found to regulate Shh signaling. DYRK2 directly phosphorylated the key Hh-pathway-regulated transcription factor GLI2 and induced its proteasome-dependent degradation. MAP3K10, in turn, affected GLI2 indirectly by modulating the activity of DYRK2.
Abstract:
The Department of Forest Resource Management at the University of Helsinki carried out the SIMO project in 2004–2007 to develop a new-generation planning system for forest management. The project parties are the organisations that do most Finnish forest planning in government-, industry- and privately owned forests. The aim of this study was to find out the needs and requirements for the new forest planning system and to clarify how the parties see the targets and processes of today's forest planning. Representatives responsible for forest planning in each organisation were interviewed one by one. According to the study, the stand-based system for managing and treating forests will continue in the future. Because of variable data-acquisition methods with differing accuracy and sources, and the development of single-tree interpretation, more and more forest data are collected without fieldwork. The benefits of using more specific forest data also call for information units smaller than the tree stand. In Finland, forest planning computation is traditionally divided into two elements. After the forest data have been updated to the present situation, the growth of every stand unit is simulated under several alternative treatment schedules. After simulation, optimisation selects one treatment schedule for every stand so that the management program satisfies the owner's goals in the best possible way. This arrangement will be maintained in the future system. The parties' requirements to add multi-criteria problem solving, group decision-support methods, and heuristic and spatial optimisation to the system make the programming work more challenging. In general, the new system is expected to be adjustable and transparent; strict documentation and free source code help to bring these expectations into effect. Various growth models and treatment schedules with different source information, accuracy, methods and processing speeds are expected to work together easily in the system.
Possibilities to calibrate models regionally and to set time-varying local parameters are also required. In the future, the forest planning system will be integrated into comprehensive data management systems together with geographic, economic and work-supervision information. This requires a modular implementation of the system and a simple data-transmission interface between modules and with other systems. No major differences in the parties' views of the system's requirements were noticed in this study; rather, the interviews completed the full picture from slightly different angles. Within the organisations, forest management planning is considered quite inflexible: it only draws the strategic lines and does not yet have a role in operative activity, although the need for and benefits of team-level forest planning are acknowledged. The demands and opportunities of variable forest data, new planning goals and the development of information technology are recognised, and the party organisations want to keep up with development. One example is their engagement in the extensive SIMO project, which connects the whole field of forest planning in Finland.
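The two-phase simulate-then-optimise arrangement described above can be sketched as follows. This is a deliberately simplified stand-in: the real system uses full growth simulators and proper optimisation methods, whereas here each stand's pre-simulated treatment alternatives are scored with a hypothetical weighted owner utility (all names and numbers are illustrative).

```python
# Hypothetical per-stand alternatives produced by a growth simulator:
# (schedule name, net income, standing volume at end of period).
ALTERNATIVES = {
    "stand1": [("no-op", 0, 120), ("thin", 40, 90), ("clearcut", 100, 10)],
    "stand2": [("no-op", 0, 200), ("thin", 60, 150)],
}

def select_schedules(alternatives, income_weight=1.0, volume_weight=0.5):
    """Pick one treatment schedule per stand that maximizes a weighted
    owner utility; a toy stand-in for the optimisation phase."""
    plan = {}
    for stand, options in alternatives.items():
        best = max(
            options,
            key=lambda opt: income_weight * opt[1] + volume_weight * opt[2],
        )
        plan[stand] = best[0]
    return plan

print(select_schedules(ALTERNATIVES))
# {'stand1': 'clearcut', 'stand2': 'thin'}
```

Changing the weights models different owner goals, which is the point of separating simulation from optimisation: the same simulated alternatives can serve many objective functions, including the multi-criteria and spatial formulations the parties requested.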
Abstract:
Mannans are abundant plant polysaccharides found in the endosperm of certain leguminous seeds (guar gum galactomannan, GG; locust bean gum galactomannan, LBG), in the tuber of the konjac plant (konjac glucomannan, KGM), and in softwoods (galactoglucomannan, GGM). This study focused on the effects of the chemical structure of mannans on their film-forming and emulsion-stabilizing properties. Special focus was on spruce GGM, which is an interesting new product from forest biorefineries. A plasticizer was needed for the formation of films from mannans other than KGM and the optimal proportion was 40% (w/w of polymers) glycerol or sorbitol. Galactomannans with lower galactose content (LBG, modified GG) produced films with higher elongation at break and tensile strength. The mechanical properties of GG-based films were improved by decreasing the degree of polymerization of the polysaccharide with moderate mannanase treatments. The improvement of mechanical properties of GGM-based films was sought by blending GGM with each of poly(vinyl alcohol) (PVOH), corn arabinoxylan (cAX), and KGM. Adding other polymers increased the elongation at break of GGM blend films. The tensile strength of films increased with increasing amounts of PVOH and KGM, but the effect of cAX was the opposite. Dynamic mechanical analysis showed two separate loss modulus peaks for blends of GGM and PVOH, but a single peak for all other films. Optical and scanning electron microscopy confirmed good miscibility of GGM with cAX and KGM. In contrast, films blended from GGM and PVOH showed phase separation. GGM and KGM were mixed with cellulose nanowhiskers (CNW) to form composite films. Addition of CNW to KGM-based films induced the formation of fiberlike structures with lengths of several millimeters. In GGM-based films, rodlike structures with lengths of tens of micrometers were formed. 
Interestingly, the notable differences in film structure did not appear to be related to the mechanical and thermal properties of the films. Permeability properties of GGM-based films were compared to those of films from the commercial mannans KGM, GG, and LBG. GGM-based films had the lowest water vapor permeability of all the mannan films. The oxygen permeability of GGM films was of the same magnitude as that of a commercial polyethylene / ethylene vinyl alcohol / polyethylene laminate film. The aroma permeability of GGM films was low. All films were transparent in the visible region, but GGM films blocked light transmission in the ultraviolet region of the spectrum. The stabilizing effect of GGM on a model beverage emulsion system was studied and compared to that of GG, LBG, KGM, and cAX. In addition, GG was enzymatically modified in order to examine the effect of the degree of polymerization and the degree of substitution of galactomannans on emulsion stability. Use of GGM increased the turbidity of emulsions both immediately after preparation and after storage of up to 14 days at room temperature. GGM emulsions had higher turbidity than the emulsions containing other mannans. Increasing the storage temperature to +45 °C led to rapid emulsion breakdown, but a decrease in storage temperature increased emulsion stability after 14 days. A low degree of polymerization and a high degree of substitution of the modified galactomannans were associated with a decrease in emulsion turbidity.
Abstract:
Photocatalytic TiO2 thin films can be highly useful in many environments and applications. They can be used as self-cleaning coatings on top of glass, tiles and steel to reduce the amount of fouling on these surfaces. Photocatalytic TiO2 surfaces have antimicrobial properties making them potentially useful in hospitals, bathrooms and many other places where microbes may cause problems. TiO2 photocatalysts can also be used to clean contaminated water and air. Photocatalytic oxidation and reduction reactions proceed on TiO2 surfaces under irradiation of UV light meaning that sunlight and even normal indoor lighting can be utilized. In order to improve the photocatalytic properties of TiO2 materials even further, various modification methods have been explored. Doping with elements such as nitrogen, sulfur and fluorine, and preparation of different kinds of composites are typical approaches that have been employed. Photocatalytic TiO2 nanotubes and other nanostructures are gaining interest as well. Atomic Layer Deposition (ALD) is a chemical gas phase thin film deposition method with strong roots in Finland. This unique modification of the common Chemical Vapor Deposition (CVD) method is based on alternate supply of precursor vapors to the substrate which forces the film growth reactions to proceed only on the surface in a highly controlled manner. ALD gives easy and accurate film thickness control, excellent large area uniformity and unparalleled conformality on complex shaped substrates. These characteristics have recently led to several breakthroughs in microelectronics, nanotechnology and many other areas. In this work, the utilization of ALD to prepare photocatalytic TiO2 thin films was studied in detail. Undoped as well as nitrogen, sulfur and fluorine doped TiO2 thin films were prepared and thoroughly characterized. ALD prepared undoped TiO2 films were shown to exhibit good photocatalytic activities. 
Of the studied dopants, sulfur and fluorine were identified as much better choices than nitrogen. Nanostructured TiO2 photocatalysts were prepared through template-directed deposition on various complex-shaped substrates by exploiting the good qualities of ALD. A clear enhancement in photocatalytic activity was achieved with these nanostructures. Several new ALD processes were also developed in this work. TiO2 processes based on two new titanium precursors, Ti(OMe)4 and TiF4, were shown to exhibit saturative ALD-type growth when water was used as the other precursor. In addition, TiS2 thin films were prepared for the first time by ALD, using TiCl4 and H2S as precursors. Ti1-xNbxOy and Ti1-xTaxOy transparent conducting oxide films were prepared successfully by ALD and post-deposition annealing. Highly unusual, explosive crystallization behaviour occurred in these mixed oxides, which resulted in anatase crystals with lateral dimensions over 1000 times the film thickness.
Abstract:
Industrial ecology is an important field of sustainability science. It can be applied to study environmental problems in a policy-relevant manner. Industrial ecology uses an ecosystem analogy: it aims at closing the loops of materials and substances and at the same time reducing resource consumption and environmental emissions. Emissions from human activities are related to human interference in material cycles. Carbon (C), nitrogen (N) and phosphorus (P) are essential elements for all living organisms, but in excess they have negative environmental impacts, such as climate change (CO2, CH4, N2O), acidification (NOx) and eutrophication (N, P). Several indirect macro-level drivers affect changes in emissions. Population and affluence (GDP/capita) often act as upward drivers for emissions. Technology, as emissions per service used, and consumption, as economic intensity of use, may act as drivers resulting in a reduction in emissions. In addition, the development of country-specific emissions is affected by international trade. The aim of this study was to analyse changes in emissions as affected by macro-level drivers in different European case studies. ImPACT decomposition analysis (the IPAT identity) was applied as a method in papers I–III. The macro-level perspective was applied to evaluate CO2 emission reduction targets (paper II) and the sharing of greenhouse gas emission reduction targets (paper IV) in the European Union (EU27) up to the year 2020. Data for the study were mainly gathered from official statistics. In all cases, the results were discussed from an environmental policy perspective. The development of nitrogen oxide (NOx) emissions in the Finnish energy sector was analysed over a long time period, 1950–2003 (paper I). Finnish NOx emissions began to decrease in the 1980s as progress in technology, in terms of NOx per unit of energy, curbed the impact of growth in affluence and population.
Carbon dioxide (CO2) emissions related to energy use during 1993–2004 (paper II) were analysed by country and region within the European Union. Considering energy-based CO2 emissions in the European Union, dematerialisation and decarbonisation did occur, but not sufficiently to offset population growth and the rapidly increasing affluence during 1993–2004. The development of the nitrogen and phosphorus load from aquaculture in relation to salmonid consumption in Finland during 1980–2007 was examined, including international trade in the analysis (paper III). A regional environmental issue, eutrophication of the Baltic Sea, together with a marginal yet locally important source of nutrients, was used as a case. Nutrient emissions from Finnish aquaculture decreased from the 1990s onwards: although population, affluence and salmonid consumption steadily increased, aquaculture technology improved and the relative share of imported salmonids increased. According to the sustainability challenge in industrial ecology, the environmental impact of growing population size and affluence should be compensated by improvements in technology (emissions per service used) and by dematerialisation. In the studied cases, the emission intensity of energy production could be lowered for NOx by cleaning the exhaust gases. Reorganisation of the structure of energy production as well as technological innovations will be essential in lowering the emissions of both CO2 and NOx. Regarding the intensity of energy use, making the combustion of fuels more efficient and reducing energy use are essential. In reducing nutrient emissions from Finnish aquaculture to the Baltic Sea (paper III) through technology, limits set by, among other things, the biological and physical properties of cultured fish will eventually be faced. Regarding consumption, salmonids are preferred to many other protein sources. Regarding trade, increasing the proportion of imports will outsource the impacts.
Besides improving technology and dematerialisation, other viewpoints may also be needed. Reducing the total amount of nutrients cycling in energy systems and eventually contributing to NOx emissions needs to be emphasised. Considering aquaculture emissions, nutrient cycles can be partly closed by using local fish as feed to replace imported feed. In particular, the reduction of CO2 emissions in the future is a very challenging task when considering the necessary rates of dematerialisation and decarbonisation (paper II). Climate change mitigation may have to focus on greenhouse gases other than CO2 and on the potential role of biomass as a carbon sink, among others. The global population is growing and scaling up the environmental impact. Population issues and growing affluence must be considered when discussing emission reductions. Climate policy has only very recently had an influence on emissions, and strong actions are now called for in climate change mitigation. Environmental policies in general must cover all the regions related to production and impacts in order to avoid outsourcing of emissions and leakage effects. The macro-level drivers affecting changes in emissions can be identified with the ImPACT framework. Statistics for generally known macro-indicators are currently relatively well available for different countries, and the method is transparent. In the papers included in this study, a similar method was successfully applied in different types of case studies. Using transparent macro-level figures and a simple top-down approach is also appropriate in evaluating and setting international emission reduction targets, as demonstrated in papers II and IV. The projected rates of population and affluence growth are especially worth considering in setting targets. However, sensitivities in the calculations must be carefully acknowledged. In the basic form of the ImPACT model, the economic intensity of consumption and the emission intensity of use are included.
To examine not only consumption but also international trade in more detail, imports were included in paper III. This example demonstrates well how outsourcing of production influences domestic emissions. Country-specific production-based emissions have often been used in similar decomposition analyses. Nevertheless, trade-related issues must not be ignored.
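The ImPACT identity described in the abstract expresses emissions as the product of four macro-level drivers: population, affluence (GDP/capita), economic intensity of consumption, and emission intensity of use. A minimal sketch of such a multiplicative decomposition, with purely illustrative numbers (not data from the thesis), could look like this:

```python
# Illustrative sketch of the ImPACT identity: emissions = P * A * C * T.
# All figures below are made up for demonstration; they are not taken
# from the thesis or its papers.

def impact(population, affluence, consumption_intensity, emission_intensity):
    """Emissions as the product of the four macro-level drivers."""
    return population * affluence * consumption_intensity * emission_intensity

# Hypothetical base year vs. end year
base = dict(population=5.0e6, affluence=20_000.0,   # GDP per capita
            consumption_intensity=1.2e-4,           # service used per unit GDP
            emission_intensity=0.5)                 # emissions per service used
end = dict(population=5.2e6, affluence=28_000.0,
           consumption_intensity=1.0e-4,
           emission_intensity=0.35)

e0, e1 = impact(**base), impact(**end)

# Multiplicative decomposition: each driver's ratio shows whether it pushed
# emissions up (> 1) or down (< 1); the ratios multiply to e1 / e0.
ratios = {k: end[k] / base[k] for k in base}
print(f"total emission change: x{e1 / e0:.3f}")
for name, r in ratios.items():
    print(f"  {name}: x{r:.3f}")
```

With these numbers, the upward pull of population and affluence is more than offset by improvements in the two intensity terms, so total emissions fall even though GDP per capita rises, mirroring the pattern reported for NOx in paper I.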
Resumo:
Transforming growth factor β signalling through Smad3 in allergy. Allergic diseases, such as atopic dermatitis, asthma and contact dermatitis, are complex diseases influenced by both genetic and environmental factors. It is still unclear why allergy and subsequent allergic disease occur in some individuals but not in others. Transforming growth factor (TGF)-β is an important immunomodulatory and fibrogenic factor that regulates cellular processes in injured and inflamed skin. TGF-β has a significant role in the regulation of the allergen-induced immune response, participating in the development of allergic and asthmatic inflammation. TGF-β is known to be an immunomodulatory factor in the progression of delayed-type hypersensitivity reactions and allergic contact dermatitis. TGF-β is crucial in regulating the cellular responses involved in allergy, such as differentiation, proliferation and migration. TGF-β signals are delivered from the cytoplasm to the nucleus by TGF-β signal transducers called Smads. Smad3 is a major signal transducer in TGF-β signalling that controls the expression of target genes in the nucleus in a cell-type-specific manner. The role of TGF-β-Smad3 signalling in the immunoregulation and pathophysiology of allergic disorders is still poorly understood. In this thesis, the role of the TGF-β-Smad signalling pathway was examined using Smad3-deficient knockout mice in murine models of allergic diseases: atopic dermatitis, asthma and allergic contact reactions. The Smad3 pathway regulates allergen-induced skin inflammation and systemic IgE antibody production in a murine model of atopic dermatitis. The defect in Smad3 signalling decreased Th2 cytokine (IL-13 and IL-5) mRNA expression in the lung, modulated the allergen-induced specific IgG1 response, and affected mucus production in the lung in a murine model of asthma.
TGF-β/Smad3 signalling contributed to inflammatory hypersensitivity reactions and disease progression via modulation of chemokine and cytokine expression, inflammatory cell recruitment, cell proliferation and regulation of the specific antibody response in a murine model of contact hypersensitivity. TGF-β modulates inflammatory responses, at least partly through the Smad3 pathway, but also through other compensatory, non-Smad-dependent pathways. Understanding the effects of the TGF-β signalling pathway in the immune system and in disease models can help in elucidating the multilevel effects of TGF-β. Unravelling the mechanisms of Smad3 may open new possibilities for treating and preventing allergic responses, which may lead to severe illness and loss of work ability. In the future, the Smad3 signalling pathway might be a potential target in the therapy of allergic diseases.
Resumo:
Breast cancer is the most commonly occurring cancer among women, and its incidence is increasing worldwide. A positive family history is a well-established risk factor for breast cancer, and it has been suggested that the proportion of breast cancer attributable to genetic factors may be as high as 30%. However, all the currently known breast cancer susceptibility genes are estimated to account for 20-30% of familial breast cancer, and only 5% of the total breast cancer incidence. It is thus likely that there are still other breast cancer susceptibility genes to be found. Cellular responses to DNA damage are crucial for maintaining genomic integrity and preventing the development of cancer. The genes operating in the DNA damage response signaling network are thus good candidates for breast cancer susceptibility genes. The aim of this study was to evaluate the role of three DNA damage response-associated genes, ATM, RAD50 and p53, in breast cancer. ATM, the gene causative for ataxia telangiectasia (A-T), has long been a strong candidate breast cancer susceptibility gene because of its function as a key DNA damage signal transducer. We analyzed the prevalence of known Finnish A-T-related ATM mutations in large series of familial and unselected breast cancer cases from different geographical regions in Finland. Of the seven A-T-related mutations, two were observed in the studied familial breast cancer patients. Additionally, a third mutation previously associated with breast cancer susceptibility was also detected. These founder mutations may be responsible for excess familial breast cancer regionally in Northern and Central Finland, but in Southern Finland our results suggest only a minor effect, if any, of ATM genetic variants on familial breast cancer. We also screened the entire coding region of the ATM gene in 47 familial breast cancer patients from Southern Finland, and evaluated the identified variants in additional cases and controls.
All the identified variants were too rare to contribute significantly to breast cancer susceptibility. However, the role of ATM in cancer development and progression was supported by the results of the immunohistochemical studies of ATM expression, as reduced ATM expression in breast carcinomas was found to correlate with tumor differentiation and hormone receptor status. Aberrant ATM expression was also a feature shared by the BRCA1/2 and the difficult-to-treat ER/PR/ERBB2-triple-negative breast carcinomas. From the clinical point of view, identification of phenotypic and genetic similarities between the BRCA1/2 and the triple-negative breast tumors could have implications for designing novel targeted therapies to which both of these classes of breast cancer might be exceptionally sensitive. Mutations of another plausible breast cancer susceptibility gene, RAD50, were found to be very rare, and RAD50 can make only a minor contribution to familial breast cancer predisposition in the UK and Southern Finland. The Finnish founder mutation RAD50 687delT seems to be a null allele and may carry a small increased risk of breast cancer. RAD50 does not act as a classical tumor suppressor gene, but it is possible that RAD50 haploinsufficiency contributes to cancer. In addition to relatively rare breast cancer susceptibility alleles, common polymorphisms may also be associated with increased breast cancer risk. Furthermore, these polymorphisms may have an impact on the progression and outcome of the disease. Our results suggest no effect of the common p53 R72P polymorphism on familial breast cancer risk or on breast cancer risk in the population, but R72P seems to be associated with histopathologic features of the tumors and the survival of the patients; the 72P homozygous genotype was an independent prognostic factor among the unselected breast cancer patients, with a two-fold increased risk of death.
These results present important novel findings of clinical significance, as the codon 72 genotype could be a useful additional prognostic marker in breast cancer, especially in the subgroup of patients with wild-type p53 in their tumors.
Resumo:
Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and the resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km to 10 km. Recently, the aim has been to reach operational scales of 1–4 km. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For the clear-sky longwave radiation parameterization, schemes used in NWP models provide much better results than simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive at producing fairly accurate surface fluxes.
Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent for both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the reason for the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is an inertial oscillation mechanism, when the large-scale flow is from the south-east or west. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment, with a 7.7 km grid size, is able to generate a similar LLJ flow structure to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is crucial, especially if the inner meso-scale model domain is small.
Resumo:
This thesis consists of four research papers and an introduction providing some background. The structure in the universe is generally considered to originate from quantum fluctuations in the very early universe. The standard lore of cosmology states that the primordial perturbations are almost scale-invariant, adiabatic, and Gaussian. A snapshot of the structure from the time when the universe became transparent can be seen in the cosmic microwave background (CMB). For a long time, mainly the power spectrum of the CMB temperature fluctuations has been used to obtain observational constraints, especially on deviations from scale-invariance and pure adiabaticity. Non-Gaussian perturbations provide a novel and very promising way to test theoretical predictions. They probe beyond the power spectrum, or two-point correlator, since non-Gaussianity involves higher-order statistics. The thesis concentrates on the non-Gaussian perturbations arising in several situations involving two scalar fields, namely hybrid inflation and various forms of preheating. First we go through some basic concepts -- such as cosmological inflation, reheating and preheating, and the role of scalar fields during inflation -- which are necessary for understanding the research papers. We also review the standard linear cosmological perturbation theory. The second-order perturbation theory formalism for two scalar fields is developed. We explain what is meant by non-Gaussian perturbations, and discuss some difficulties in parametrisation and observation. In particular, we concentrate on the nonlinearity parameter. The prospects of observing non-Gaussianity are briefly discussed. We apply the formalism and calculate the evolution of the second-order curvature perturbation during hybrid inflation. We estimate the amount of non-Gaussianity in the model and find that there is a possibility for an observational effect. The non-Gaussianity arising in preheating is also studied.
We find that the level produced by the simplest model of instant preheating is insignificant, whereas standard preheating with parametric resonance as well as tachyonic preheating can easily saturate and even exceed the observational limits. We also mention other approaches to the study of primordial non-Gaussianities, which differ from the perturbation theory method chosen in the thesis work.
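The nonlinearity parameter mentioned in the abstract is, in one common local-type convention (assumed here for illustration; the thesis may use a different parametrisation), defined by expanding the curvature (Bardeen) potential around its Gaussian part:

```latex
% One common local-type convention for the nonlinearity parameter f_NL:
% the potential Phi is written as a Gaussian field Phi_g plus a quadratic
% correction whose amplitude is f_NL.
\Phi(\mathbf{x}) = \Phi_{\mathrm{g}}(\mathbf{x})
  + f_{\mathrm{NL}} \left( \Phi_{\mathrm{g}}^{2}(\mathbf{x})
  - \langle \Phi_{\mathrm{g}}^{2} \rangle \right)
```

A purely Gaussian perturbation corresponds to f_NL = 0, so higher-order statistics such as the bispectrum, which vanish for a Gaussian field, probe f_NL directly.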