14 results for Endogenous Information Structure
in Helda - Digital Repository of University of Helsinki
Abstract:
The information that economic agents have and regard as relevant to their decision making is often assumed to be exogenous in economics: the agents are assumed either to possess the payoff-relevant information or to be able to observe it without exerting any effort to acquire it. In this thesis we relax the assumption of an ex-ante fixed information structure and study what happens to equilibrium behavior when the agents must also decide what information to acquire and when to acquire it. The thesis addresses this question in two essays on herding and two essays on auction theory. In the first two essays, which are joint work with Klaus Kultti, we study herding models where it is costly to acquire information on the actions that preceding agents have taken. In our model the agents have to decide both the action that they take and the information that they want to acquire by observing their predecessors. We characterize the equilibrium behavior when the decision to observe preceding agents' actions is endogenous and show how the equilibrium outcome may differ from the standard model, in which all preceding agents' actions are assumed to be observable. In the latter part of the thesis we study two dynamic auctions: the English and the Dutch auction. We consider a situation where bidders are uninformed about their valuations for the object that is put up for sale and may acquire this information at a small cost at any point during the auction; valuations are independent and private. In the third essay we characterize the equilibrium behavior in an English auction with both informed and uninformed bidders. We show that the informed bidder may jump bid to signal to the uninformed bidder that he has a high valuation, thus deterring the uninformed bidder from acquiring information and staying in the auction. The uninformed bidder optimally acquires information once the price has passed a particular threshold and the informed bidder has not signalled that his valuation is high. In addition, we provide an example of an information structure in which the informed bidder initially waits and then makes multiple jump bids. In the fourth essay we study the Dutch auction. We consider two cases in which all bidders are initially uninformed: in the first, the information acquisition cost is the same across all bidders; in the second, the cost is also independently distributed and private information to the bidders. We characterize a mixed-strategy equilibrium in the first case and a pure-strategy equilibrium in the second. In addition, we provide a conjectured equilibrium for an asymmetric situation with one informed and one uninformed bidder. The usual first-price sealed-bid auction and the Dutch auction are strategically equivalent, but this equivalence breaks down when information can be acquired during the auction. Comparing the revenues that the two formats generate, we find that under some circumstances the Dutch auction outperforms the first-price sealed-bid auction.
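To illustrate the threshold logic in the English auction, the following is a minimal Monte Carlo sketch under stated assumptions, not the equilibrium model of the thesis: one uninformed bidder with a value drawn uniformly on [0,1], a rival whose drop-out price is also uniform on [0,1], and a cost paid to learn the value once the ascending price reaches a chosen threshold. All names, distributions and the payoff rule are illustrative.

import random

def expected_payoff(threshold, cost=0.05, trials=200_000):
    # Average payoff of an uninformed bidder who learns her value only when
    # the ascending price reaches `threshold` (toy model, not the thesis model).
    total = 0.0
    for _ in range(trials):
        v = random.random()        # bidder's true value, unknown ex ante
        rival = random.random()    # price at which the rival drops out
        if rival < threshold:
            # The rival quits before information is acquired: the bidder wins
            # while still uninformed, at the rival's drop-out price (possibly at a loss).
            total += v - rival
        else:
            # The price reaches the threshold: pay the cost, learn v,
            # then stay in only while the price is below v.
            total -= cost
            if rival < v:
                total += v - rival
    return total / trials

# Sweeping the threshold shows an interior optimum: acquiring too early wastes
# the cost when the rival was about to quit anyway, while waiting too long
# risks winning uninformed at a price above one's true value.
for t in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
    print(f"threshold {t:.1f}: expected payoff {expected_payoff(t):.4f}")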
Abstract:
Information structure and Kabyle constructions: three sentence types in the Construction Grammar framework. The study examines three Kabyle sentence types and their variants. These sentence types were chosen because they code the same state of affairs but have different syntactic structures. The sentence types are the Dislocated sentence, the Cleft sentence, and the Canonical sentence. I argue, first, that a proper description of these sentence types must include information structure and, second, that a description which takes information structure into account is possible in the Construction Grammar framework. The study thus constitutes a testing ground for the applicability of Construction Grammar to a lesser-known language, notably because the three types of sentences cannot be differentiated without information structure categories and, consequently, these categories must also be integrated into the grammatical description. The information structure analysis is based on the model outlined by Knud Lambrecht. In that model, information structure is considered a component of sentence grammar that ensures pragmatically correct sentence forms. The work starts with an examination of the three sentence types and the analyses that have been carried out in André Martinet's functional grammar framework. This introduces the sentence types chosen as the object of study and discusses the difficulties related to their analysis. After a presentation of the state of the art, including earlier and more recent models, the principles and notions of Construction Grammar and of Lambrecht's model are introduced and explicated. The information structure analysis is presented in three chapters, each treating one of the three sentence types. The analyses are based on spoken language data and elicitation. Prosody is included in the study when a syntactic structure seems to code two different focus structures; in such cases, it is pertinent to investigate whether these are coded by prosody. The final chapter presents the constructions that have been established and the problems encountered in analysing them. It also discusses the impact of the study on the theories used and on the theory of syntax in general.
Abstract:
We demonstrate how endogenous information acquisition in credit markets creates lending cycles when competing banks undertake their screening decisions in an uncoordinated way, thereby highlighting the role of intertemporal screening externalities induced by lending market competition as a structural source of instability. We show that the uncoordinated screening behavior of competing banks may not only be the source of an important financial multiplier, but also an independent source of fluctuations inducing business cycles. The screening cycle mechanism is robust to generalizations along many dimensions, such as the lending market structure, the determination of the lending rate, and imperfections in the screening technology.
Abstract:
National anniversaries such as independence days demand precise coordination in order to make citizens change their routines to forego work and spend the day at rest or at festivities that provide social focus and spectacle. The complex social construction of national days is taken for granted and operates as a given in the news media, which are the main agents responsible for coordinating these planned disruptions of normal routines. This study examines the language used in the news to construct the rather unnatural idea of national days and to align people in observing them. The data for the study consist of news stories about the Fourth of July in the New York Times, sampled over 150 years and are supplemented by material from other sources and other countries. The study is multidimensional, applying concepts from pragmatics (speech acts, politeness, information structure), systemic functional linguistics (the interpersonal metafunction and the Appraisal framework) and cognitive linguistics (frames, metaphor) as well as journalism and communications to arrive at an interdisciplinary understanding of how resources for meaning are used by writers and readers of the news stories. The analysis shows that on national anniversaries, nations tend to be metaphorized as persons having birthdays, to whom politeness should be shown. The face of the nation is to be respected in the sense of identifying the nation's interests as one's own (positive face) and speaking of citizen responsibilities rather than rights (negative face). Resources are available for both positive and negative evaluations of events and participants and the newspaper deftly changes footings (Goffman 1981) to demonstrate the required politeness while also heteroglossically allowing for a certain amount of disattention and even protest - within limits, for state holidays are almost never construed as Bakhtinian festivals, as they tend to reaffirm the hierarchy rather than invert it. Celebrations are evaluated mainly for impressiveness, and for the essentially contested quality of appropriateness, which covers norms of predictability, size, audience response, aesthetics, and explicit reference to the past. Events may also be negatively evaluated as dull ("banal") or inauthentic ("hoopla"). Audiences are evaluated chiefly in terms of their enthusiasm, or production of appropriate displays for emotional response, for national days are supposed to be occasions of flooding-out of nationalistic feeling. By making these evaluations, the newspaper reinforces its powerful position as an independent critic, while at the same time playing an active role in the construction and reproduction of emotional order embodied in "the nation's birthday." As an occasion for mobilization and demonstrations of power, national days may be seen to stand to war in the relation of play to fighting (Bateson 1955). Evidence from the newspaper's coverage of recent conflicts is adduced to support this analysis. In the course of the investigation, methods are developed for analyzing large collections of newspaper content, particularly topical soft news and feature materials that have hitherto been considered less influential and worthy of study than so-called hard news. 
In his work on evaluation in newspaper stories, White (1998) proposed that the classic hard news story is focused on an event that threatens the social order. News of holidays and celebrations in general does not fit this pattern: its central event is a reproduction of the social order. Thus in the system of news values (Galtung and Ruge 1965), national holiday news draws on "ground" news values such as continuity and predictability rather than "figure" news values such as negativity and surprise. It is argued that this ground helps form a necessary space for hard news to be seen as important, similar to the way in which the information structure of language is seen to rely on the regular alternation of given and new information (Chafe 1994).
Abstract:
This study analyses the diction of Latin building inscriptions. Despite its importance, this topic has rarely been discussed before: the most substantial contribution on the subject is a short dissertation by Klaus Gast (1965) that focuses on 100 inscriptions dating mostly from the Republican period. Marietta Horster (2001) also touched upon this theme in her thesis on imperial building inscriptions. I have collected my source material in North Africa because more Latin building inscriptions dating from the Imperial period have survived there than in any other area of the Roman Empire. By means of a thorough and independent survey, I have assembled all relevant African Latin building inscriptions datable to the Roman period (between 146 BC and AD 425), 1002 texts, into a corpus. These inscriptions are all fully edited in Appendix 1; Appendix 2 contains references to earlier editions. To facilitate search operations, both are also available in electronic form, downloadable from http://www.helsinki.fi/hum/kla/htm/jatkoopinnot.htm. Chapter one is an introduction dealing with the nature of building inscriptions as source material. Chapter two offers a statistical overview of the material. The following main section of the work falls into five chapters, each of which analyses one main part of a building inscription. An average building inscription can be divided into five parts: the starting phrase opens the inscription (a dedication to gods, for example), the subject part identifies the builder, the object part describes the constructed or repaired building, the predicate part records the building activity, and the supplement part offers additional information on the project (it can specify the funding, for instance). These chapters are systematic and chronological; their purpose is to register and interpret the phrases used and to analyse the reasons for their use and for their popularity among the different groups of builders. Chapter eight, which follows the main section of the work, creates a typology of building inscriptions based on their structure and presents the most frequently attested types of building inscriptions. The conclusion describes, on a general level, how the diction of building inscriptions developed during the period of study and how this striking development resulted from socio-economic changes that took place in Romano-African society during Antiquity. This study shows that the phraseology of building inscriptions had a clear correlation both with the type of builder and with the date of carving. Private builders tended to accentuate their participation (especially its financial side) in the project; honouring the emperor received more emphasis in the building inscriptions set up by communities; the texts produced by the army were concise. The chronological development is so clear that it enables stylistic dating. At the beginning of the imperial period the phrases were clear, concrete, formal and stereotyped, but by Late Antiquity they had become vague, subjective, flexible, varied and even rhetorically or poetically coloured.
Abstract:
An important challenge in the forest industry is to get the appropriate raw material out of the forests and into the wood processing industry. Growth and stem reconstruction simulators are therefore increasingly integrated into industrial conversion simulators for linking the properties of wooden products to the three-dimensional structure of stems and their growing conditions. Static simulators predict the wood properties from stem dimensions at the end of a growth simulation period, whereas in dynamic approaches the structural components, e.g. branches, are incremented along with the growth processes. The dynamic approach can be applied to stem reconstruction by predicting the three-dimensional stem structure from external tree variables (e.g. age, height) as the result of growth to the current state. In this study, a dynamic growth simulator, PipeQual, and a stem reconstruction simulator, RetroSTEM, are adapted to Norway spruce (Picea abies [L.] Karst.) to predict the three-dimensional structure of stems (tapers, branchiness, wood basic density) over time such that both simulators can be integrated into a sawing simulator. The parameterisation of the PipeQual and RetroSTEM simulators for Norway spruce relied on a theoretically based description of tree structure that develops in the growth process and follows certain conservative structural regularities while allowing for plasticity in crown development. The crown expressed both regularity and plasticity in its development, as the vertical foliage density peaked regularly at about 5 m from the stem apex, varying below that with tree age and dominance position (Study I). Conservative stem structure was characterized in terms of (1) the pipe ratios between foliage mass and branch and stem cross-sectional areas at the crown base, (2) the allometric relationship between foliage mass and crown length, (3) mean branch length relative to crown length and (4) form coefficients in branches and stem (Study II). The pipe ratio between branch and stem cross-sectional area at the crown base and the mean branch length relative to the crown length may differ in trees before and after canopy closure, but this variation should be further analysed in stands of different ages and densities with varying site fertilities and climates. The predictions of the PipeQual and RetroSTEM simulators were evaluated by comparing the simulated values to measured ones (Studies III, IV). Both simulators predicted stem taper and branch diameter at the individual tree level with a small bias, and RetroSTEM predictions of wood density were accurate. To achieve even more accurate predictions of stem diameters and branchiness along the stem, both simulators should be further improved by revising the following aspects: the relationship between foliage and stem sapwood area in the upper stem, the error source in branch sizes, the crown base development, and the height growth models in RetroSTEM. In Study V, the RetroSTEM simulator was integrated into the InnoSIM sawing simulator and, according to the pilot simulations, turned out to be an efficient tool for readily producing stand-scale information about stem sizes and structure when approximating the available assortments of wood products.
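As a schematic illustration of the first two regularities above (generic pipe-model and allometric forms; the symbols and functional forms are illustrative, not the parameterisation fitted in the thesis):

W_f = \eta_b A_{b,cb},    W_f = \eta_s A_{s,cb},    W_f = a L_c^{b},

where W_f is foliage mass, A_{b,cb} and A_{s,cb} are the branch and stem cross-sectional areas at the crown base, L_c is crown length, and \eta_b, \eta_s, a and b are species-specific parameters.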
Abstract:
The purpose of this study is to describe the development of the application of mass spectrometry to the structural analysis of non-coding ribonucleic acids during the past decade. Mass spectrometric methods are compared with traditional gel electrophoretic methods, the performance characteristics of mass spectrometric analyses are studied, and future trends in the mass spectrometry of ribonucleic acids are discussed. Non-coding ribonucleic acids are short polymeric biomolecules which are not translated into proteins but which may affect gene expression in all organisms. Regulatory ribonucleic acids act through transient interactions with key molecules in signal transduction pathways. Interactions are mediated through specific secondary and tertiary structures. Posttranscriptional modifications in the structures of the molecules may introduce new properties to the organism, such as adaptation to environmental changes or the development of resistance to antibiotics. In the scope of this study, the structural studies include i) determination of the sequence of nucleobases in the polymer chain, ii) characterisation and localisation of posttranscriptional modifications in nucleobases and in the backbone structure, iii) identification of ribonucleic acid-binding molecules and iv) probing of higher-order structures in the ribonucleic acid molecule. Bacteria, archaea, viruses and HeLa cancer cells have been used as target organisms. Synthesised ribonucleic acids consisting of structural regions of interest have been used frequently. Electrospray ionisation (ESI) and matrix-assisted laser desorption ionisation (MALDI) have been used for ionisation of ribonucleic acid analytes. Ammonium acetate and 2-propanol are common solvents for ESI. Trihydroxyacetophenone is the optimal MALDI matrix for ionisation of ribonucleic acids and peptides. Ammonium salts are used as additives in ESI buffers and MALDI matrices to remove cation adducts. Reverse-phase high-performance liquid chromatography has been used for desalting and fractionation of analytes either off-line or on-line, coupled with the ESI source. Triethylamine and triethylammonium bicarbonate are used almost exclusively as ion-pair reagents. A Fourier transform ion cyclotron resonance analyser using ESI coupled with liquid chromatography is the platform of choice for all forms of structural analyses. A time-of-flight (TOF) analyser using MALDI may offer a sensitive, easy-to-use and economical solution for simple sequencing of longer oligonucleotides and for analyses of analyte mixtures without prior fractionation. Special analysis software is used for computer-aided interpretation of mass spectra. With mass spectrometry, sequences of 20-30 nucleotides in length may be determined unambiguously. Sequencing may be applied to the quality control of short synthetic oligomers for analytical purposes. Sequencing in conjunction with other structural studies enables accurate localisation and characterisation of posttranscriptional modifications and identification of nucleobases and amino acids at the sites of interaction. High-throughput screening methods for RNA-binding ligands have been developed. Probing of higher-order structures has provided supporting data for computer-generated three-dimensional models of viral pseudoknots. In conclusion, mass spectrometric methods are well suited to the structural analysis of small species of ribonucleic acids, such as short non-coding ribonucleic acids in the molecular size region of 20-30 nucleotides.
Structural information not attainable with other methods of analysis, such as nuclear magnetic resonance and X-ray crystallography, may be obtained with the use of mass spectrometry. Ligand screening may be used in the search for possible new therapeutic agents. Demanding assay design and challenging interpretation of data require multidisciplinary knowledge. The implementation of mass spectrometry in structural studies of ribonucleic acids is therefore probably most efficiently conducted in specialist groups consisting of researchers from various fields of science.
Abstract:
The soy-derived phytoestrogen genistein and 17β-estradiol (E2), the principal endogenous estrogen in women, are also potent antioxidants protecting LDL and HDL lipoproteins against oxidation. This protection is enhanced by esterification with fatty acids, resulting in lipophilic molecules that accumulate in lipoproteins or fatty tissues. The aims were to investigate whether genistein becomes esterified with fatty acids in human plasma and accumulates in lipoproteins, and to develop a method for quantitating these esters; to study the antioxidant activity of different natural and synthetic estrogens in LDL and HDL; and to determine the E2 esters in visceral and subcutaneous fat in late pregnancy and in pre- and postmenopause. Human plasma was incubated with [3H]genistein and its esters were analyzed from lipoprotein fractions. Time-resolved fluoroimmunoassay (TR-FIA) was used to quantitate genistein esters in monkey plasma after subcutaneous and oral administration. The E2 esters in women's serum and adipose tissue were also quantitated using TR-FIA. The antioxidant activity of estrogen derivatives (n=43) on LDL and HDL was assessed by monitoring the copper-induced formation of conjugated dienes. Human plasma was shown to produce lipoprotein-bound genistein fatty acid esters, providing a possible explanation for the previously reported increased oxidation resistance of LDL particles during intake of soybean phytoestrogens. Genistein esters were introduced into the blood by subcutaneous administration. The antioxidant effect of estrogens on lipoproteins is highly structure-dependent. LDL and HDL were protected against oxidation by many unesterified yet lipophilic derivatives. The strongest antioxidants had an unsubstituted A-ring phenolic hydroxyl group with one or two adjacent methoxy groups. E2 ester levels were high during late pregnancy. The median concentration of E2 esters in pregnancy serum was 0.42 nmol/l (n=13) and in pre- (n=8) and postmenopause (n=6) 0.07 and 0.06 nmol/l, respectively. In pregnancy visceral fat the concentration of E2 esters was 4.24 nmol/l and in pre- and postmenopause 0.82 and 0.74 nmol/l. The results from subcutaneous fat were similar. In serum and fat during pregnancy, E2 esters constituted about 0.5% and 10% of the free E2, respectively. In non-pregnant women most of the E2 in fat was esterified (ester/free ratio 150-490%). In postmenopause, E2 levels in fat greatly exceeded those in serum, the majority being esterified. The pathways for fatty acid esterification of steroid hormones are found in organisms ranging from invertebrates to vertebrates. The evolutionary preservation and relative abundance of E2 esters, especially in fat tissue, suggest a biological function, most likely in providing a readily available source of E2. The body's own estrogen reservoir could be used as a source of E2 by pharmacologically regulating E2 esterification or hydrolysis.
Abstract:
This thesis presents a novel application of x-ray Compton scattering to structural studies of molecular liquids. Systematic Compton-scattering experiments on water have been carried out with unprecedented accuracy at third-generation synchrotron-radiation laboratories. The experiments focused on temperature effects in water, the water-to-ice phase transition, quantum isotope effects, and ion hydration. The experimental data is interpreted by comparison with both model computations and ab initio molecular-dynamics simulations. Accordingly, Compton scattering is found to provide unique intra- and intermolecular structural information. This thesis thus demonstrates the complementarity of the technique to traditional real-space probes for studies on the local structure of water and, more generally, molecular liquids.
Abstract:
This master's thesis studies how trade liberalization affects firm-level productivity and industrial evolution. To do so, I build a dynamic model in which firm-level productivity is endogenous, in order to investigate the influence of trade on firms' productivity and on the market structure. In the framework, heterogeneous firms in the same industry operate differently in equilibrium. Specifically, firms are ex ante identical, but heterogeneity arises as an equilibrium outcome. Under monopolistic competition, this type of model yields an industry that is represented not by a steady-state outcome but by an evolution that relies on the decisions made by individual firms. I prove that trade liberalization has a generally positive impact on technology adoption rates and hence increases firm-level productivity. This endogenous technology adoption model also captures the stylized fact that exporting firms are larger and more productive than their non-exporting counterparts in the same sector. I assume that the number of firms is endogenous, since, according to the empirical literature, industrial evolution shows considerably different patterns across countries: some industries experience large-scale exit of firms in periods of contracting market shares, while others display a relatively stable or gradually increasing number of firms. The term "shakeout" describes a dramatic decrease in the number of firms. To explain the causes of shakeouts, I construct a model in which forward-looking firms decide to enter and exit the market on the basis of their state of technology. In equilibrium, firms choose different dates to adopt an innovation, which generates a gradual diffusion process, and it is exactly this gradual diffusion process that generates the rapid, large-scale exit phenomenon. Specifically, the model demonstrates a positive feedback between firms' exit and adoption: the reduction in the number of firms increases the incentives for the remaining firms to adopt the innovation. Therefore, in a setting of complete information, the model not only generates a shakeout but also captures the stability of an industry. However, a purely national view of industrial evolution neglects the importance of international trade in determining the shape of the market structure. In particular, I show that higher trade barriers lead to more fragile markets, encouraging over-entry in the initial stage of the industry life cycle and raising the probability of a shakeout. Therefore, more liberalized trade generates a more stable market structure from both national and international viewpoints. The main references are Ederington and McCalman (2008, 2009).
Abstract:
QCD factorization in the Bjorken limit makes it possible to separate the long-distance physics from the hard subprocess. At leading twist, only one parton in each hadron is coherent with the hard subprocess. Higher-twist effects increase as one of the active partons carries most of the longitudinal momentum of the hadron, x -> 1. In the Drell-Yan process \pi N -> \mu^- \mu^+ + X, the polarization of the virtual photon is observed to change to longitudinal when the photon carries a fraction x_F > 0.6 of the pion's momentum. I define and study the Berger-Brodsky limit of Q^2 -> \infty with Q^2(1-x) fixed. A new kind of factorization holds in the Drell-Yan process in this limit, in which both pion valence quarks are coherent with the hard subprocess, the virtual photon is longitudinal rather than transverse, and the cross section is proportional to a multiparton distribution. Generalized parton distributions contain information on the longitudinal momentum and transverse position densities of partons in a hadron. Transverse charge densities are Fourier transforms of the electromagnetic form factors. I discuss the application of these methods to the QED electron, studying the form factors, charge densities and spin distributions of the leading-order |e\gamma> Fock state in impact parameter and longitudinal momentum space. I show how the transverse shape of any virtual-photon-induced process, \gamma^*(q) + i -> f, may be measured. Qualitative arguments concerning the size of such transitions have been made previously in the literature, but without a precise analysis. Properly defined, the amplitudes and the cross section in impact parameter space provide information on the transverse shape of the transition process.
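For concreteness, the statement that transverse charge densities are Fourier transforms of the electromagnetic form factors can be written in the standard two-dimensional form (generic notation, not necessarily that used in the thesis):

\rho(b) = \int \frac{d^2 q_\perp}{(2\pi)^2} \, e^{-i q_\perp \cdot b} \, F_1(Q^2 = q_\perp^2),

where F_1 is the Dirac form factor and b is the impact parameter, i.e. the transverse distance from the hadron's transverse centre of momentum.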
Abstract:
In the context of health care, information technology (IT) has an important role in the operational infrastructure, ranging from business management to patient care. An essential part of the system is medication management in inpatient and outpatient care. Community pharmacists' strategy has been to extend practice responsibilities beyond dispensing towards patient care services, but few studies have evaluated the strategic development of IT systems to support this vision. The objectives of this study were to assess and compare independent Finnish community pharmacy owners' and staff pharmacists' priorities concerning the content and structure of the next generation of community pharmacy IT systems, to explore international experts' visions and strategic views on IT development needs in relation to services provided in community pharmacies, to identify IT innovations facilitating patient care services and to evaluate their development and implementation processes, and to assess community pharmacists' readiness to adopt innovations. The study applied both qualitative and quantitative methods: qualitative personal interviews of 14 experts in community pharmacy services and related IT from eight countries, a national survey of Finnish community pharmacy owners (mail survey, response rate 53%, n=308), and a survey of a representative sample of staff pharmacists (online survey, response rate 22%, n=373). Finnish independent community pharmacy owners gave priority to logistical functions but also to those related to medication information and patient care. The managers and staff pharmacists have different views of the importance of IT features, reflecting their different professional duties in the community pharmacy; this indicates the need to involve different occupational groups in planning new IT systems for community pharmacies. A majority of the international experts shared the vision of community pharmacy adopting a patient care orientation, supported by IT-based documentation, new technological solutions, access to information, and shared patient data. Community pharmacy IT innovations were rare, which is paradoxical because owners' and staff pharmacists' perception of their own innovativeness was high. Community pharmacy IT system development processes usually had not undergone systematic needs assessment research beforehand or evaluation after implementation, and were most often coordinated by national governments without subsequent commercialization. In particular, community pharmacy IT development lacks research, organization, leadership and user involvement in the process. Those responsible for IT development in the community pharmacy sector should create long-term IT development strategies that are in line with community pharmacy service development strategies. This could provide systematic guidance for future projects, ensuring that potential innovations are based on a sufficient understanding of the pharmacy practice problems they are intended to solve, encouraging strong leadership in research and in the development of innovations so that community pharmacists' potential innovativeness is used, and ensuring that professional needs and strategic priorities are considered even if the development process is led by those outside the profession.
Abstract:
To protect and restore lake ecosystems under threats posed by the increasing human population, information on their ecological quality is needed. Lake sediments provide a data-rich archive that allows identification of various biological components present prior to anthropogenic alterations, as well as a constant record of changes. By providing a longer dimension of time than any ongoing monitoring programme, palaeolimnological methods can help in understanding natural variability and long-term ecological changes in lakes. As zooplankton have a central role in the lake food web, their remains can potentially provide versatile information on past trophic structure. However, various taphonomic processes operating in lakes still raise questions concerning how subfossil assemblages reflect living communities. This thesis work aimed at improving the use of sedimentary zooplankton remains in the reconstruction of past zooplankton communities and trophic structure in lakes. To quantify interspecific differences in the accumulation of remains, the subfossils of nine pelagic zooplankton taxa in annually laminated sediments were compared with monitoring results for live zooplankton in Lake Vesijärvi. This lake has a known history of eutrophication and recovery, which resulted from reduced external loading and effective fishing of plankti-benthivorous fish. The response of zooplankton assemblages to these known changes was resolved using annually laminated sediments. The generality of the responses observed in Lake Vesijärvi was further tested with a set of 31 lakes in Southern Finland, relating subfossils in surface sediments to contemporary water quality and fish density, as well as to lake morphometry. The results demonstrated differential preservation and retention of cladoceran species in the sediment. Daphnia, Diaphanosoma and Ceriodaphnia were clearly underrepresented in the sediment samples in comparison to the well-preserved Bosmina species, Chydorus, Limnosida and Leptodora. For well-preserved species, the annual net accumulation rate was similar to or above the expected values, reflecting effective sediment focusing and accumulation in the deepest part of the lake. The decreased fish density and improved water quality led to subtle changes in zooplankton community composition. The abundance of Diaphanosoma and Limnosida increased after the reduction in fish density, while Ceriodaphnia and rotifers decreased. The most sensitive indicator of fish density was the mean size of Daphnia ephippia and of Bosmina (E.) crassicornis ephippia and carapaces. The concentration of plant-associated species increased, reflecting expanding littoral vegetation along with increasing transparency. Several of the patterns observed in Lake Vesijärvi could also be found within the set of 31 lakes. According to this thesis work, the most useful cladoceran-based indices of nutrient status and planktivorous fish density in Finnish lakes were the relative abundances of certain pelagic taxa and the mean size of Bosmina spp. carapaces, especially those of Bosmina (E.) cf. coregoni. The abundance of plant-associated species reflected the potential area for aquatic plants. Lake morphometry and sediment organic content, however, explained a relatively high proportion of the variance in the species data, and more studies are needed to quantify lake-specific differences in the accumulation and preservation of remains.
Commonly occurring multicollinearity between environmental variables obstructs the cladoceran-based reconstruction of single environmental variables. As taphonomic factors and several direct and indirect structuring forces in lake ecosystems simultaneously affect zooplankton, the subfossil assemblages should be studied in a holistic way before drawing final conclusions about the trophic structure and changes in lake ecological quality.
Abstract:
Pedestrian streets are a recognized way to revitalize retail trade in city-centre areas. At first many merchants are sceptical of the changes a pedestrian street brings, but experience shows that pedestrian streets have been successful and increase the sales of the businesses located on them. Some businesses, however, do not benefit from pedestrian streets, while others benefit greatly when a street is pedestrianized. This master's thesis examines the commercial structure of pedestrian streets in order to find out what types of businesses are found on them. The results are compared with the commercial structure of the central commercial zone in which the pedestrian street is located, revealing the differences in commercial structure. The thesis also examines how common chain stores are on pedestrian streets and in central commercial zones. The research data were collected through a commercial inventory carried out in three Finnish towns: Tammisaari, Kerava and Pori. The data were classified and the results were mapped, and basic statistical methods were used to analyse them. The results were broken down by pedestrian street, shopping centres and other locations, and classified into the general categories of retail, restaurant and other service. The results show clear differences between pedestrian streets and central commercial zones. Pedestrian streets have many more retail shops, especially fashion shops, than other streets. Shopping centres have a commercial structure similar to that of pedestrian streets, whereas other streets have fewer retail shops and more service businesses. Restaurants are almost equally common throughout the central commercial zone. For chain stores the results are inconclusive: there are indications that they are more common on pedestrian streets, especially in large cities, but the evidence is not sufficient to draw firm conclusions. Over the past 10-15 years, Finnish pedestrian streets have become more restaurant-dominated at the expense of other services, while the number of retail shops has remained stable. Finnish pedestrian streets differ in commercial structure from other Nordic pedestrian streets, which have more retail shops and fewer service businesses. The case-specific results vary considerably; local factors are often stronger than general theories of shop location on pedestrian streets. Overall, the results support the theoretical framework and provide more detailed information on the commercial structure of pedestrian streets and central commercial zones and on the factors that influence it.