19 results for Transparent

in Helda - Digital Repository of the University of Helsinki


Relevance: 10.00%

Abstract:

This master's thesis examines Bo Carpelan's novel Alkutuuli (Urwind, 1993) as the fragmentarily narrated life story of its protagonist, Daniel Urwind. The novel is a fictional autobiography that proceeds on two temporal levels and is based on weekly diary entries. Daniel perceives his identity through memories tied to space and through his own name, at the same time cutting the continuity of the narration. The research problem is to outline Daniel's modes of narration, his reasons for writing, and the outcome of the process. The analysis of past-directed first-person narration is based on the concepts of Dorrit Cohn's Transparent Minds. Urwind features an irregular alternation of memory narration and memory monologue. Embedded within these are monologues of deceased loved ones, behind which Daniel's voice shines through: the monologues are speeches and thoughts imagined or paraphrased by Daniel, and their speaker is the other party in his solitary conversation. The primary theoretical sources on autobiography are Päivi Kosonen's articles and her study Elämät sanoissa. Bo Carpelan's essayistic and literary production also plays a central role in the interpretation. The starting point for the writing is a phase of life that destabilizes the protagonist's identity. After his wife Maria leaves for America for a year, Daniel strives to find his own language and to form a more coherent picture of himself through his memories. Space is central to this process: as Daniel moves through his house, which is also his childhood home, sensory reality acts as a trigger of recollection, and sounds and scents lead him to relive past memories in the present. At the same time, the house and its rooms become personified and the self begins to take on the shape of space. The discovery of new doors and rooms symbolizes the clarification of Daniel's memories and the deepening of his self-understanding. The writing can be interpreted as a psychoanalytic process in which the autobiographical self converses with its past self. The speaker awaits an answer from the other, although the answer is, paradoxically, found within himself. The polyphony of the autobiography is underlined by the presence of deceased loved ones, the identities of Daniel's different ages, and the selves appearing in the figures of doppelgängers. Through writing, Daniel seeks to gain distance from himself and to reach a state of selflessness, which is discussed with the help of John Keats's concept of "negative capability". Urwind is an artist novel in which wings and flying, representing artisthood, are linked to imagination and renewal. Daniel's path to the selflessness that precedes artisthood is nevertheless difficult, and many signs point to the fate of a dilettante. Behind the fragmentary narration with its dreamlike logic, certain regularities can also be discerned. The short biographies of other characters are episodes that reveal their wounds and become part of Daniel's life story. Daniel's literary self-portrait is a montage in which elements from different levels (the present and the past, imagination, dreams, and the embedded monologues) are juxtaposed on a single plane. Daniel mirrors what he has lived and experienced through a comic mirror structure in which figures from his past march along in a procession of fools. The hope of a coherent self-image nevertheless proves impossible, since a closure of meaning would be contrary to truth. The autobiography, which set out from a reflection on the name Urwind, ends, in keeping with its cyclical movement, in a situation where the protagonist accepts the incomprehensibility of life and trusts in the power hidden in his own name, the wind, alluding at the same time to the title of the novel. Keywords: Bo Carpelan - memories - space - fragmentariness - fictional autobiography

Relevance: 10.00%

Abstract:

This thesis combines a computational analysis of a comprehensive corpus of Finnish lake names with a theoretical background in cognitive linguistics. The combination results, on the one hand, in a description of the toponymic system and the processes involved in analogy-based naming and, on the other hand, in some adjustments to Construction Grammar. Finnish lake names are suitable for this kind of study, as they are to a large extent semantically transparent even when relatively old. There is also a large number of them, and they are comprehensively collected in a computer database. The current work starts with an exploratory computational analysis of co-location patterns between different lake names. Such an analysis makes it possible to assess the importance of analogy and patterns in naming. Prior research has suggested that analogy plays an important role, often also in cases where there are other motivations for the name, and the current study confirms this. However, it also appears that naming patterns are very fuzzy and that their nature is somewhat hard to define in an essentially structuralist tradition. In describing toponymic structure and the processes involved in naming, cognitive linguistics presents itself as a promising theoretical basis. The descriptive formalism of Construction Grammar seems especially well suited for the task. However, productivity then becomes a problem: it is not nearly as clear-cut as the theory often assumes, and this is even more apparent in names than in more traditional linguistic material. The varying degree of productivity is most naturally described by a prototype-based theory. Such an approach, however, requires some adjustments to Construction Grammar.
Based on all this, the thesis proposes a descriptive model where a new name (or, more generally, a new linguistic expression) can be formed by conceptual integration from either a single prior example or a construction generalised from a number of different prior ones. The new model accounts nicely for various aspects of naming that are problematic for the traditional description based on analogy and patterns.
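The exploratory co-location analysis mentioned above can be sketched roughly as follows. This is only an illustration of the general idea, not the thesis's actual method: the toy name data, the grid-cell IDs and the `colocation_counts` helper are hypothetical stand-ins for the comprehensive Finnish lake-name database.

```python
from collections import Counter
from itertools import combinations

# Hypothetical toy data: lake name -> set of map grid cells where it occurs.
lake_names = {
    "Valkeajärvi": {"A1", "A2", "B1", "C3"},
    "Mustajärvi":  {"A1", "A2", "C3", "D4"},
    "Syväjärvi":   {"A2", "B1"},
    "Likolampi":   {"D4", "E5"},
}

def colocation_counts(names):
    """Count how often each pair of names occurs in the same grid cell.

    Frequent co-location of two names hints that one may have served as an
    analogical model for the other (e.g. opposite-colour pairs such as
    Valkeajärvi 'white lake' / Mustajärvi 'black lake')."""
    counts = Counter()
    for a, b in combinations(sorted(names), 2):
        counts[(a, b)] = len(names[a] & names[b])
    return counts

counts = colocation_counts(lake_names)
# Rank name pairs by co-location frequency.
for pair, n in counts.most_common(3):
    print(pair, n)
```

On real data, pairs with unusually high co-location counts would then be examined as candidate naming patterns.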

Relevance: 10.00%

Abstract:

Le naturalisme finlandais. Une conception entropique du quotidien. (Finnish Naturalism: An Entropic Conception of Everyday Life.) Nineteenth-century naturalism was a strikingly international literary movement. After emerging in France in the 1870s, it spread all over Europe, including young, small nations with a relatively recent literary tradition, such as Finland. This thesis surveys the role and influence of French naturalism on the Finnish literature of the 1880s and 1890s. On the basis of a selection of works by six Finnish authors (Juhani Aho, Minna Canth, Kauppis-Heikki, Teuvo Pakkala, Ina Lange and Karl August Tavaststjerna), the study establishes a view of the main features of Finnish naturalism in comparison with those of French authors such as Zola, Maupassant and Flaubert. The study's methodological framework is genre theory: even though naturalist writers insisted on a transparent description of reality, naturalist texts are firmly rooted in general generic categories with definable relations and constants on which European novels impose variations. By means of two key concepts, 'entropy' and 'everyday life', this thesis establishes the parameters of the naturalist genre. At the heart of the naturalist novel is a movement in the direction of disintegration and confusion, from order to disorder, from illusion to disillusion. This entropic vision is merged into the representation of everyday life, focusing on socially mediocre characters and discovering their miseries in all their banality and daily grayness. By using Mikhail Bakhtin's idea of literary genres as a means of understanding experience, this thesis suggests that everyday life is an ideological core of naturalist literature that determines not only its thematic but also its generic distinctions: in relation to other genres, such as Balzac's realism, naturalism appears primarily to be a banalization of everyday life.
In idyllic genres, everyday life can be represented by means of sublimation, but the naturalist novel establishes a distressing, negative everyday life and thus strives to take a critical view of modern society. Besides these central themes, the study surveys the generic blends in naturalism. The thesis analyzes how the coalition of naturalism and the melodramatic mode in the work of Minna Canth serves naturalism's ambition to discover the unconscious instincts underlying daily realities, and how the symbolic mode in the work of Juhani Aho duplicates the semantic level of the apparently insignificant, everyday naturalist details. The study compares the naturalist novel to the ideological novel (roman à thèse) and surveys the central dilemma of naturalism: the confrontation between the optimistic belief in social reform and the pessimistic theory of determinism. The thesis proposes that the naturalist novel's contribution to social reform lies in its shock effect. By representing the unpleasant truth, the entropy of everyday life, it aims to scandalize readers and make them aware of harsh realities that might apply to them as well.

Relevance: 10.00%

Abstract:

The purpose of this research was to examine teachers' pedagogical thinking as based on beliefs. It aimed to investigate and identify beliefs in teachers' speech as they reflected on their own teaching. The placement of beliefs on the levels of pedagogical thinking was also examined. The second starting point of the study was the Instrumental Enrichment intervention, which aims to enhance the learning potential and cognitive functioning of students. The goal was to investigate how the five main principles of the intervention come forward in teachers' thinking; a more specific research question was how similar teachers' beliefs are to the main principles of the intervention. The teacher-thinking paradigm provided the framework for this study, and its essential concepts are defined in the theoretical framework. The model of pedagogical thinking was important in the examination of teachers' thinking. Beliefs were approached through several different theories, and Feuerstein's theories of structural cognitive modifiability and mediated learning experience complemented the theory of teacher thinking. The research material was gathered in two parts. In the first part, two mathematics lessons of three class teachers were videotaped. In the second part, the teachers were interviewed using a stimulated recall method. The interviews were recorded and analysed by qualitative content analysis. Teachers' beliefs were divided into themes, and the contents of these themes were described. This part of the analysis was inductive; the second part was deductive and based on theories of pedagogical thinking levels and on the Instrumental Enrichment intervention. According to the results, three subcategories of teachers' beliefs were found: beliefs about learning, beliefs about teaching and beliefs about students. When the teachers discussed learning, they emphasized the importance of understanding. In teaching-related beliefs, student-centredness was highlighted.
The teachers also brought out some requirements for good teaching: clarity, diversity and planning. Beliefs about students were divided into two groups: the teachers believed that there are learning differences between students and that students have improved over the years. Because most of the beliefs were close to practice and related to concrete classroom situations, they were situated at the action level of pedagogical thinking. Some teaching- and learning-related beliefs of individual teachers were situated at the object theory level; no metatheory-level beliefs were found. The occurrence of the intervention's main principles differed between teachers: they were much more consistent and transparent in the beliefs of one teacher than in those of the other two. Differences also occurred between principles; for example, reciprocity came up in every teacher's beliefs, but modifiability was found in the beliefs of only one teacher. The results of this research are consistent with other research in the field. Teachers' beliefs about teaching were individual: even though shared themes were found, the teachers emphasized different aspects of their work. The occurrence of beliefs in accordance with the intervention was teacher-specific, and inconsistencies were also found within individual teachers' beliefs.

Relevance: 10.00%

Abstract:

Parkinson’s disease (PD) is the second most common neurodegenerative disease among the elderly. Its etiology is unknown and no disease-modifying drugs are available. Thus, more information concerning its pathogenesis is needed. Among other genes, mutated PTEN-induced kinase 1 (PINK1) has been linked to early-onset and sporadic PD, but its mode of action is poorly understood. Most animal models of PD are based on the use of the neurotoxin 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP). MPTP is metabolized to MPP+ by monoamine oxidase B (MAO B) and causes cell death of dopaminergic neurons in the substantia nigra in mammals. Zebrafish has been a widely used model organism in developmental biology, but is now emerging as a model for human diseases due to its ideal combination of properties. Zebrafish are inexpensive and easy to maintain, develop rapidly, breed in large quantities producing transparent embryos, and are readily manipulated by various methods, particularly genetic ones. In addition, zebrafish are vertebrate animals and results derived from zebrafish may be more applicable to mammals than results from invertebrate genetic models such as Drosophila melanogaster and Caenorhabditis elegans. However, the similarity cannot be taken for granted. The aim of this study was to establish and test a PD model using larval zebrafish. The developing monoaminergic neuronal systems of larval zebrafish were investigated. We identified and classified 17 catecholaminergic and 9 serotonergic neuron populations in the zebrafish brain. A 3-dimensional atlas was created to facilitate future research. Only one gene encoding MAO was found in the zebrafish genome. Zebrafish MAO showed MAO A-type substrate specificity, but non-A-non-B inhibitor specificity. Distribution of MAO in larval and adult zebrafish brains was both diffuse and distinctly cellular. 
Inhibition of MAO during larval development led to markedly elevated 5-hydroxytryptamine (serotonin, 5-HT) levels, which decreased the locomotion of the fish. MPTP exposure caused a transient loss of cells in specific aminergic cell populations and decreased locomotion. MPTP-induced changes could be rescued by the MAO B inhibitor deprenyl, suggesting a role for MAO in MPTP toxicity. MPP+ affected only one catecholaminergic cell population; thus, the action of MPP+ was more selective than that of MPTP. The zebrafish PINK1 gene was cloned, and morpholino oligonucleotides were used to suppress its expression in larval zebrafish. The functional domains and expression pattern of zebrafish PINK1 resembled those of other vertebrates, suggesting that zebrafish is a feasible model for studying PINK1. Translation inhibition resulted in cell loss in the same catecholaminergic cell populations as MPTP and MPP+. Inactivation of PINK1 sensitized larval zebrafish to subefficacious doses of MPTP, causing a decrease in locomotion and cell loss in one dopaminergic cell population. Zebrafish appears to be a feasible model for studying PD, since its aminergic systems, the mode of action of MPTP, and the functions of PINK1 resemble those of mammals. However, the functions of zebrafish MAO differ from those of the two forms of MAO found in mammals. Future studies using zebrafish PD models should utilize the advantages specific to zebrafish, such as the ability to execute large-scale genetic or drug screens.

Relevance: 10.00%

Abstract:

The Department of Forest Resource Management at the University of Helsinki carried out the so-called SIMO project in 2004–2007 to develop a new-generation planning system for forest management. The project parties are the organisations that carry out most Finnish forest planning in state-, industry- and privately-owned forests. The aim of this study was to establish the needs and requirements for the new forest planning system and to clarify how the parties see the targets and processes of today's forest planning. The representatives responsible for forest planning in each organisation were interviewed one by one. According to the study, the stand-based system for managing and treating forests will continue in the future. Because of variable data acquisition methods with different accuracies and sources, and because of the development of single-tree interpretation, more and more forest data are collected without fieldwork. The benefits of using more specific forest data also call for information units smaller than the tree stand. In Finland, the traditional forest planning computation is arranged in two parts. After the forest data have been updated to the present situation, the growth of every stand unit is simulated under different alternative treatment schedules. After simulation, optimisation selects one treatment schedule for every stand so that the management program satisfies the owner's goals in the best possible way. This arrangement will be maintained in the future system. The parties' requirements to add multi-criteria problem solving, group decision support methods, and heuristic and spatial optimisation to the system make the programming work more challenging. In general, the new system is expected to be adjustable and transparent; strict documentation and free source code help to meet these expectations. Varying growth models and treatment schedules with different source information, accuracy, methods and processing speeds are expected to work easily together in the system.
Possibilities to calibrate the models regionally and to set local parameters that change over time are also required. In the future, the forest planning system will be integrated into comprehensive data management systems together with geographic, economic and work supervision information. This requires a modular implementation and a simple data transmission interface between the modules and with other systems. No major differences between the parties' views of the system requirements were found in this study; rather, the interviews completed the full picture from slightly different angles. Within the organisations, forest planning is considered quite inflexible: it only draws the strategic lines and does not yet have a role in operative activity, although the need for and benefits of team-level forest planning are acknowledged. The demands and opportunities of varying forest data, new planning goals and the development of information technology are well known, and the party organisations want to keep up with this development. One example is their engagement in the extensive SIMO project, which connects the whole field of forest planning in Finland.
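The two-part computation described above (simulation of alternative treatment schedules per stand, followed by optimisation) can be illustrated with a minimal sketch. The stand data, the schedule outcomes and the single-criterion owner objective below are invented for illustration and are far simpler than anything in an operational system such as SIMO.

```python
# Simulation phase: for each stand, alternative treatment schedules with
# their predicted outcomes (all numbers invented for illustration).
schedules = {
    "stand_1": [
        {"name": "no_treatment", "npv": 1200, "end_volume": 310},
        {"name": "thinning",     "npv": 1500, "end_volume": 250},
        {"name": "clearcut",     "npv": 2100, "end_volume": 40},
    ],
    "stand_2": [
        {"name": "no_treatment", "npv": 800,  "end_volume": 180},
        {"name": "thinning",     "npv": 950,  "end_volume": 150},
    ],
}

def choose_schedules(schedules, objective):
    """Optimisation phase: pick one schedule per stand maximising the
    owner's objective. With a separable single-criterion objective this
    reduces to an independent choice per stand; real systems add
    forest-level constraints, multiple criteria and spatial optimisation."""
    return {
        stand: max(alternatives, key=objective)
        for stand, alternatives in schedules.items()
    }

# A purely income-oriented owner: maximise net present value.
plan = choose_schedules(schedules, objective=lambda s: s["npv"])
print({stand: s["name"] for stand, s in plan.items()})
```

A different owner goal is expressed simply by swapping the objective function, e.g. `lambda s: s["end_volume"]` for an owner who values standing timber.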

Relevance: 10.00%

Abstract:

Mannans are abundant plant polysaccharides found in the endosperm of certain leguminous seeds (guar gum galactomannan, GG; locust bean gum galactomannan, LBG), in the tuber of the konjac plant (konjac glucomannan, KGM), and in softwoods (galactoglucomannan, GGM). This study focused on the effects of the chemical structure of mannans on their film-forming and emulsion-stabilizing properties. Special focus was on spruce GGM, which is an interesting new product from forest biorefineries. A plasticizer was needed for the formation of films from mannans other than KGM and the optimal proportion was 40% (w/w of polymers) glycerol or sorbitol. Galactomannans with lower galactose content (LBG, modified GG) produced films with higher elongation at break and tensile strength. The mechanical properties of GG-based films were improved by decreasing the degree of polymerization of the polysaccharide with moderate mannanase treatments. The improvement of mechanical properties of GGM-based films was sought by blending GGM with each of poly(vinyl alcohol) (PVOH), corn arabinoxylan (cAX), and KGM. Adding other polymers increased the elongation at break of GGM blend films. The tensile strength of films increased with increasing amounts of PVOH and KGM, but the effect of cAX was the opposite. Dynamic mechanical analysis showed two separate loss modulus peaks for blends of GGM and PVOH, but a single peak for all other films. Optical and scanning electron microscopy confirmed good miscibility of GGM with cAX and KGM. In contrast, films blended from GGM and PVOH showed phase separation. GGM and KGM were mixed with cellulose nanowhiskers (CNW) to form composite films. Addition of CNW to KGM-based films induced the formation of fiberlike structures with lengths of several millimeters. In GGM-based films, rodlike structures with lengths of tens of micrometers were formed. 
Interestingly, the notable differences in the film structure did not appear to be related to the mechanical and thermal properties of the films. Permeability properties of GGM-based films were compared to those of films from the commercial mannans KGM, GG, and LBG. GGM-based films had the lowest water vapor permeability when compared to films from other mannans. The oxygen permeability of GGM films was of the same magnitude as that of commercial polyethylene / ethylene vinyl alcohol / polyethylene laminate film. The aroma permeability of GGM films was low. All films were transparent in the visible region, but GGM films blocked light transmission in the ultraviolet region of the spectrum. The stabilizing effect of GGM on a model beverage emulsion system was studied and compared to that of GG, LBG, KGM, and cAX. In addition, GG was enzymatically modified in order to examine the effect of the degree of polymerization and the degree of substitution of galactomannans on emulsion stability. Use of GGM increased the turbidity of emulsions both immediately after preparation and after storage of up to 14 days at room temperature. GGM emulsions had higher turbidity than the emulsions containing other mannans. Increasing the storage temperature to +45 °C led to rapid emulsion breakdown, but a decrease in storage temperature increased emulsion stability after 14 days. A low degree of polymerization and a high degree of substitution of the modified galactomannans were associated with a decrease in emulsion turbidity.

Relevance: 10.00%

Abstract:

Photocatalytic TiO2 thin films can be highly useful in many environments and applications. They can be used as self-cleaning coatings on top of glass, tiles and steel to reduce the amount of fouling on these surfaces. Photocatalytic TiO2 surfaces have antimicrobial properties making them potentially useful in hospitals, bathrooms and many other places where microbes may cause problems. TiO2 photocatalysts can also be used to clean contaminated water and air. Photocatalytic oxidation and reduction reactions proceed on TiO2 surfaces under irradiation of UV light meaning that sunlight and even normal indoor lighting can be utilized. In order to improve the photocatalytic properties of TiO2 materials even further, various modification methods have been explored. Doping with elements such as nitrogen, sulfur and fluorine, and preparation of different kinds of composites are typical approaches that have been employed. Photocatalytic TiO2 nanotubes and other nanostructures are gaining interest as well. Atomic Layer Deposition (ALD) is a chemical gas phase thin film deposition method with strong roots in Finland. This unique modification of the common Chemical Vapor Deposition (CVD) method is based on alternate supply of precursor vapors to the substrate which forces the film growth reactions to proceed only on the surface in a highly controlled manner. ALD gives easy and accurate film thickness control, excellent large area uniformity and unparalleled conformality on complex shaped substrates. These characteristics have recently led to several breakthroughs in microelectronics, nanotechnology and many other areas. In this work, the utilization of ALD to prepare photocatalytic TiO2 thin films was studied in detail. Undoped as well as nitrogen, sulfur and fluorine doped TiO2 thin films were prepared and thoroughly characterized. ALD prepared undoped TiO2 films were shown to exhibit good photocatalytic activities. 
Of the studied dopants, sulfur and fluorine were identified as much better choices than nitrogen. Nanostructured TiO2 photocatalysts were prepared through template-directed deposition on various complex-shaped substrates by exploiting the good qualities of ALD. A clear enhancement in the photocatalytic activity was achieved with these nanostructures. Several new ALD processes were also developed in this work. TiO2 processes based on two new titanium precursors, Ti(OMe)4 and TiF4, were shown to exhibit saturative ALD-type growth when water was used as the other precursor. In addition, TiS2 thin films were prepared for the first time by ALD, using TiCl4 and H2S as precursors. Ti1-xNbxOy and Ti1-xTaxOy transparent conducting oxide films were prepared successfully by ALD and post-deposition annealing. A highly unusual, explosive crystallization behaviour occurred in these mixed oxides, which resulted in anatase crystals with lateral dimensions over 1000 times the film thickness.

Relevance: 10.00%

Abstract:

Industrial ecology is an important field of sustainability science. It can be applied to study environmental problems in a policy-relevant manner. Industrial ecology uses an ecosystem analogy; it aims at closing the loops of materials and substances and at the same time reducing resource consumption and environmental emissions. Emissions from human activities are related to human interference in material cycles. Carbon (C), nitrogen (N) and phosphorus (P) are essential elements for all living organisms, but in excess they have negative environmental impacts, such as climate change (CO2, CH4, N2O), acidification (NOx) and eutrophication (N, P). Several indirect macro-level drivers affect emission changes. Population and affluence (GDP/capita) often act as upward drivers for emissions. Technology, as emissions per service used, and consumption, as economic intensity of use, may act as drivers resulting in a reduction in emissions. In addition, the development of country-specific emissions is affected by international trade. The aim of this study was to analyse changes in emissions as affected by macro-level drivers in different European case studies. ImPACT decomposition analysis (the IPAT identity) was applied as the method in papers I–III. The macro-level perspective was applied to evaluate CO2 emission reduction targets (paper II) and the sharing of greenhouse gas emission reduction targets (paper IV) in the European Union (EU27) up to the year 2020. Data for the study were mainly gathered from official statistics. In all cases, the results were discussed from an environmental policy perspective. The development of nitrogen oxide (NOx) emissions in the Finnish energy sector was analysed over a long time period, 1950–2003 (paper I). Finnish emissions of NOx began to decrease in the 1980s as progress in technology, in terms of NOx/energy, curbed the impact of the growth in affluence and population.
Carbon dioxide (CO2) emissions related to energy use during 1993–2004 (paper II) were analysed by country and region within the European Union. Considering energy-based CO2 emissions in the European Union, dematerialization and decarbonisation did occur, but not sufficiently to offset population growth and the rapidly increasing affluence during 1993–2004. The development of the nitrogen and phosphorus load from aquaculture in relation to salmonid consumption in Finland during 1980–2007 was examined, including international trade in the analysis (paper III). A regional environmental issue, the eutrophication of the Baltic Sea, and a marginal yet locally important source of nutrients was used as a case. Nutrient emissions from Finnish aquaculture decreased from the 1990s onwards: although population, affluence and salmonid consumption steadily increased, aquaculture technology improved and the relative share of imported salmonids increased. According to the sustainability challenge in industrial ecology, the environmental impact of the growing population size and affluence should be compensated by improvements in technology (emissions per service used) and by dematerialisation. In the studied cases, the emission intensity of energy production could be lowered for NOx by cleaning the exhaust gases. Reorganization of the structure of energy production as well as technological innovations will be essential in lowering the emissions of both CO2 and NOx. Regarding the intensity of energy use, making the combustion of fuels more efficient and reducing energy use are essential. In reducing nutrient emissions from Finnish aquaculture to the Baltic Sea (paper III) through technology, the limits of the biological and physical properties of cultured fish, among others, will eventually be faced. Regarding consumption, salmonids are preferred to many other protein sources. Regarding trade, increasing the proportion of imports will outsource the impacts.
Besides improving technology and dematerialization, other viewpoints may also be needed. Reducing the total amount of nutrients cycling in energy systems and eventually contributing to NOx emissions needs to be emphasized. Considering aquaculture emissions, nutrient cycles can be partly closed by using local fish as feed to replace imported feed. In particular, the reduction of CO2 emissions in the future is a very challenging task when considering the necessary rates of dematerialisation and decarbonisation (paper II). Climate change mitigation may have to focus on greenhouse gases other than CO2 and on the potential role of biomass as a carbon sink, among others. The global population is growing and scaling up the environmental impact. Population issues and growing affluence must be considered when discussing emission reductions. Climate policy has only very recently had an influence on emissions, and strong actions are now called for in climate change mitigation. Environmental policies in general must cover all the regions related to production and impacts in order to avoid outsourcing of emissions and leakage effects. The macro-level drivers affecting changes in emissions can be identified with the ImPACT framework. Statistics for generally known macro-indicators are currently relatively well available for different countries, and the method is transparent. In the papers included in this study, a similar method was successfully applied in different types of case studies. Using transparent macro-level figures and a simple top-down approach is also appropriate in evaluating and setting international emission reduction targets, as demonstrated in papers II and IV. The projected rates of population and affluence growth are especially worth consideration in setting targets. However, sensitivities in the calculations must be carefully acknowledged. In the basic form of the ImPACT model, the economic intensity of consumption and the emission intensity of use are included.
In seeking to examine consumption but also international trade in more detail, imports were included in paper III. This example demonstrates well how outsourcing of production influences domestic emissions. Country-specific production-based emissions have often been used in similar decomposition analyses. Nevertheless, trade-related issues must not be ignored.
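The multiplicative structure of the ImPACT identity described above is what makes the decomposition work: because emissions are written as a product of driver terms, the total emission ratio between two years factors exactly into per-driver ratios. The sketch below illustrates this; all figures are invented for illustration and are not from the study's data.

```python
# ImPACT identity (a refinement of IPAT): emissions = P * A * C * T, where
#   P = population,
#   A = affluence (GDP per capita),
#   C = economic intensity of consumption (service used per unit GDP),
#   T = technology (emissions per service used).

def impact(P, A, C, T):
    return P * A * C * T

year0 = {"P": 5.0e6, "A": 20_000.0, "C": 0.010, "T": 0.50}  # base year
year1 = {"P": 5.2e6, "A": 25_000.0, "C": 0.009, "T": 0.40}  # end year

e0 = impact(**year0)
e1 = impact(**year1)

# Multiplicative decomposition: the total emission ratio equals the product
# of the per-driver ratios, so each driver's contribution to the change
# can be read off separately.
ratios = {k: year1[k] / year0[k] for k in year0}
total_ratio = e1 / e0

print(f"emissions: {e0:.3g} -> {e1:.3g} (x{total_ratio:.3f})")
for k, r in ratios.items():
    direction = "upward" if r > 1 else "downward"
    print(f"  driver {k}: x{r:.3f} ({direction})")
```

In this toy case, population and affluence act as upward drivers while the intensity terms C and T push downward, and their combined effect determines whether total emissions rise or fall, which mirrors the pattern discussed in the case studies.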

Relevance: 10.00%

Abstract:

Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere with time can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km to 10 km. Recently, the aim has been to reach operationally to scales of 1 4 km. This requires less approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For the clear-sky longwave radiation parameterization, schemes used in NWP-models provide much better results in comparison with simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive for producing fairly accurate surface fluxes. 
Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent to both long- and shortwave radiation under clear-sky conditions. For cloudy conditions, simple cloud-correction functions are tested. In the case of longwave radiation, the empirical cloud corrections provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional mesoscale model experiments suggest that the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is caused by an inertial-oscillation mechanism when the large-scale flow is from the south-east or west. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment with a 7.7 km grid size is able to generate an LLJ flow structure similar to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is crucial, especially if the inner mesoscale model domain is small.
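To give a feel for what a "simple empirical scheme" of the kind compared above looks like, here is a minimal sketch of a classic Brunt-type clear-sky longwave formula. This is an illustrative stand-in, not necessarily one of the schemes tested in the thesis, and the coefficient values are generic textbook values, not fitted ones:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def brunt_longwave_down(t_air_k, vapor_pressure_hpa, a=0.52, b=0.065):
    """Brunt-type empirical clear-sky downward longwave flux (W m^-2).
    Effective atmospheric emissivity is modelled as a linear function of
    the square root of the near-surface vapour pressure; the coefficients
    a and b are site-tuned -- the defaults here are only illustrative."""
    emissivity = a + b * vapor_pressure_hpa ** 0.5
    return emissivity * SIGMA * t_air_k ** 4

# Roughly +10 C air with 10 hPa vapour pressure (hypothetical inputs)
flux = brunt_longwave_down(283.15, 10.0)
```

Schemes of this type need only screen-level temperature and humidity, which is exactly why they remain competitive as cheap baselines against full radiative-transfer parameterizations.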

Resumo:

This thesis consists of four research papers and an introduction providing some background. The structure in the universe is generally considered to originate from quantum fluctuations in the very early universe. The standard lore of cosmology states that the primordial perturbations are almost scale-invariant, adiabatic, and Gaussian. A snapshot of the structure from the time when the universe became transparent can be seen in the cosmic microwave background (CMB). For a long time, mainly the power spectrum of the CMB temperature fluctuations has been used to obtain observational constraints, especially on deviations from scale-invariance and pure adiabaticity. Non-Gaussian perturbations provide a novel and very promising way to test theoretical predictions. They probe beyond the power spectrum, or two-point correlator, since non-Gaussianity involves higher-order statistics. The thesis concentrates on the non-Gaussian perturbations arising in several situations involving two scalar fields, namely hybrid inflation and various forms of preheating. First we go through some basic concepts -- such as cosmological inflation, reheating and preheating, and the role of scalar fields during inflation -- which are necessary for understanding the research papers. We also review standard linear cosmological perturbation theory, and develop the second-order perturbation theory formalism for two scalar fields. We explain what is meant by non-Gaussian perturbations and discuss some difficulties in their parametrisation and observation. In particular, we concentrate on the nonlinearity parameter. The prospects of observing non-Gaussianity are briefly discussed. We apply the formalism to calculate the evolution of the second-order curvature perturbation during hybrid inflation. We estimate the amount of non-Gaussianity in the model and find that there is a possibility for an observational effect. The non-Gaussianity arising in preheating is also studied.
We find that the level produced by the simplest model of instant preheating is insignificant, whereas standard preheating with parametric resonance, as well as tachyonic preheating, can easily saturate and even exceed the observational limits. We also mention other approaches to the study of primordial non-Gaussianities which differ from the perturbation-theory method chosen in the thesis work.
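For orientation, the nonlinearity parameter mentioned above is conventionally defined through a local expansion of the gravitational (Bardeen) potential around a Gaussian field; in the widely used convention (given here as general background, not as the thesis's own notation):

```latex
\Phi(\mathbf{x}) \;=\; \phi_{\mathrm{G}}(\mathbf{x})
  \;+\; f_{\mathrm{NL}}\left[\phi_{\mathrm{G}}^{2}(\mathbf{x})
  - \langle \phi_{\mathrm{G}}^{2} \rangle\right]
```

A non-zero \(f_{\mathrm{NL}}\) thus introduces a quadratic correction to the Gaussian field \(\phi_{\mathrm{G}}\), which vanishes in the two-point correlator and first becomes visible in the three-point correlator (the bispectrum), which is why non-Gaussianity probes beyond the power spectrum.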

Resumo:

This master's thesis examines tourism-related housing in the village of Kilpisjärvi, Finland, and the discourses surrounding it. I study tourism development in Kilpisjärvi and the debate related to this process. My methodology is based on discourse and content analysis. The purpose of this study is to examine and classify the discourses on tourism-related housing and to draw lessons from the recent development of Kilpisjärvi. Kilpisjärvi is the northernmost village in western Finnish Lapland, located in the middle of the highest mountain area of Finland. The area has been a reindeer-herding area of the Saami people for centuries, but it lacked permanent settlement until the beginning of the 20th century. The first tourist accommodation was built in the 1930s, followed by the road in the 1940s and the hotel in the 1950s. Traditionally the area has attracted skiers and hikers, and it is also known for its extraordinary nature and rare plant life. Tourism development in Kilpisjärvi was slow until the turn of the millennium, when rapid growth in tourism-related housing was triggered by extensive land-use planning. The small wilderness village of Kilpisjärvi has grown into a tourism centre with over 800 beds in commercial enterprises, more than a hundred second homes, and two large caravan areas. This development has raised conflicts among the villagers. The empirical part of this study is based on interviews with 17 permanent residents of Kilpisjärvi and three Norwegian cottage owners. Six discourses can be distinguished: 1) nature and landscape, 2) economy, 3) place, 4) reindeer herding, 5) governance, and 6) possibilities to influence decision-making. The first discourse stressed that tourism development and building should adapt to nature and the landscape, while the economic discourse stressed the economic importance of tourism to Kilpisjärvi and the municipality of Enontekiö. The third discourse noted the change in Kilpisjärvi as a place due to the boom in tourism development.
The discourse of reindeer herding was clearly distinguished from the others, seeing tourism development as mainly negative. Governance was seen as an important tool for regulating development, but many felt that the municipal administration has failed to take into account aspects of tourism development other than economic factors. Many villagers saw their influence on decision-making as weak, while landowners and municipal decision-makers were seen as an oligarchy in land-use planning, regardless of the formal participatory planning process enforced by law. I conclude that it is important to take into account the diversity of local discourses in tourism development and land-use issues. A transparent and genuinely participatory planning process would promote sustainable development, prevent conflicts, and allow decisions and development that would satisfy a larger number of local dwellers than at present.

Resumo:

Abstract (The socio-onomastic approach and translation): The article adopts an onomastic perspective on translation and highlights the challenges posed by names. Recent socio-onomastic research has drawn attention to the emotive, appealing, ideological and integrative functions of names, showing strong links both with the period and with society. In the article this is exemplified with ship names from the nineteenth century, which reflect partly classicism (Argo, Hercules, Juno, Neptunus) and partly national romanticism (Aallotar, Aino, Sampo, Wellamo). A special challenge is posed by transparent names that evoke the actual words they are formed from, such as Penningdraken ('Money Dragon'), a ship that brought in big money, and Människoätaren ('The Man-Eater'), a ship on which many sailors lost their lives. Names evoke time-bound and culture-bound associations, and the translator should be able to interpret a name as an embodiment of the society and culture from which it originates.

Resumo:

In this thesis we deal with the concept of risk. The objective is to bring together, and draw conclusions from, normative information regarding quantitative portfolio management and risk assessment. The first essay concentrates on return dependency. We propose an algorithm for classifying markets into rising and falling, and from it derive a statistic, the Trend Switch Probability, for detecting long-term return dependency in the first moment. The empirical results suggest that the Trend Switch Probability is robust over various volatility specifications. Serial dependency behaves differently in bear and bull markets, however: it is strongly positive in rising markets, whereas bear markets are closer to a random walk. Realized volatility, a technique for estimating volatility from high-frequency data, is investigated in essays two and three. In the second essay we find, when measuring realized variance on a set of German stocks, that the second-moment dependency structure is highly unstable and changes randomly. The results also suggest that volatility is at times non-stationary. In the third essay we examine the impact of market microstructure on the error between the estimated realized volatility and the volatility of the underlying process. With simulation-based techniques we show that autocorrelation in returns leads to biased variance estimates, and that lower sampling frequency and non-constant volatility increase the variation of the error between the estimated variance and the variance of the underlying process. From these essays we can conclude that volatility is not easily estimated, even from high-frequency data: it is not well behaved in terms of either stability or dependency over time. Based on these observations, we recommend the use of simple, transparent methods that are likely to be more robust over differing volatility regimes than models with a complex parameter universe.
In analyzing long-term return dependency in the first moment we find that the Trend Switch Probability is a robust estimator. This is an interesting area for further research, with important implications for active asset allocation.
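The realized-variance estimator at the centre of essays two and three is simple to state: it is the sum of squared intraday log returns. A minimal Python sketch, with purely hypothetical simulation parameters, showing the estimator recovering a known constant volatility in the ideal frictionless case:

```python
import numpy as np

def realized_variance(prices):
    """Realized variance: the sum of squared intraday log returns."""
    log_returns = np.diff(np.log(prices))
    return float(np.sum(log_returns ** 2))

# Simulate one trading 'day' of high-frequency prices under constant
# volatility (one return per second over 6.5 hours; hypothetical setup)
rng = np.random.default_rng(0)
n, sigma = 23400, 0.01
returns = rng.normal(0.0, sigma / np.sqrt(n), n)
prices = np.concatenate(([100.0], 100.0 * np.exp(np.cumsum(returns))))

rv = realized_variance(prices)
# In this ideal case rv should be close to the true daily variance sigma**2
```

The essays' point is that this ideal case breaks down in practice: return autocorrelation from market microstructure biases the estimate, and coarser sampling or time-varying volatility widens the gap between the estimate and the variance of the underlying process.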

Resumo:

A growing body of empirical research examines the structure and effectiveness of corporate governance systems around the world. An important insight from this literature is that corporate governance mechanisms address the excessive use of managerial discretionary power to extract private benefits by expropriating shareholder value. One possible route of expropriation is to reduce the quality of disclosed earnings by manipulating the financial statements. According to the value-relevance theorem, this lower quality of earnings should then be reflected in the firm's stock price. Hence, instead of testing the direct effect of corporate governance on the firm's market value, it is important to understand the causes of lower-quality accounting earnings. This thesis contributes to the literature by increasing knowledge about the extent of earnings management, measured as the extent of discretionary accruals in total disclosed earnings, and its determinants across transitional European countries. The thesis comprises three essays of empirical analysis, of which the first two utilize data on Russian listed firms, whereas the third uses data from 10 European economies. More specifically, the first essay adds to existing research connecting earnings management to corporate governance. It tests the impact of the Russian corporate governance reforms of 2002 on the quality of disclosed earnings in all publicly listed firms, and provides empirical evidence that the desired impact of the reforms is not fully realized in Russia without proper enforcement. Instead, firm-level factors such as long-term capital investments and compliance with International Financial Reporting Standards (IFRS) determine the quality of earnings. The results presented in the essay support the notion proposed by Leuz et al.
(2003) that reforms aimed at bringing transparency do not deliver the desired results in economies where investor protection is low and legal enforcement is weak. The second essay focuses on the relationship between internal control mechanisms, such as the types and levels of ownership, and the quality of disclosed earnings in Russia. The empirical analysis shows that controlling shareholders in Russia use their power to manipulate reported performance in order to extract private benefits of control. Comparatively, firms owned by the State have significantly better quality of disclosed earnings than firms held by other controllers such as oligarchs and foreign corporations. Interestingly, the market performance of firms controlled by either the State or oligarchs is better than that of widely held firms. The third essay provides useful evidence that both ownership structures and economic characteristics are important in determining the quality of disclosed earnings across three groups of countries in Europe. The evidence suggests that ownership structure is the more important determinant in developed and transparent countries, while economic characteristics matter more in developing and transitional countries.
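Discretionary accruals of the kind used as the earnings-management measure here are commonly estimated as regression residuals from an accruals model. A hedged sketch using the modified Jones specification (a standard choice in this literature, not necessarily the exact model of the thesis; all firm-year data below are made up):

```python
import numpy as np

def discretionary_accruals(ta, assets_lag, d_rev, d_rec, ppe):
    """Modified Jones model sketch: regress scaled total accruals on
    1/lagged assets, scaled (change in revenue minus change in
    receivables), and scaled gross PPE; the regression residuals serve
    as the proxy for discretionary (potentially managed) accruals."""
    y = ta / assets_lag
    X = np.column_stack([
        1.0 / assets_lag,
        (d_rev - d_rec) / assets_lag,
        ppe / assets_lag,
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coef  # discretionary component per firm-year

# Hypothetical firm-year data (monetary amounts in millions)
ta = np.array([-12.0, 5.0, -3.0, 8.0, -6.0])          # total accruals
assets_lag = np.array([100.0, 250.0, 80.0, 300.0, 150.0])
d_rev = np.array([10.0, 30.0, -5.0, 25.0, 12.0])       # change in revenue
d_rec = np.array([2.0, 8.0, -1.0, 6.0, 3.0])           # change in receivables
ppe = np.array([40.0, 120.0, 30.0, 160.0, 70.0])       # gross PPE

da = discretionary_accruals(ta, assets_lag, d_rev, d_rec, ppe)
```

In the essays' setting, the magnitude of such residuals would then be related to governance variables such as ownership type; a larger absolute residual indicates more aggressive accrual-based earnings management.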