30 results for "Instrumental reason"
Abstract:
Numerical weather prediction (NWP) models provide the basis for weather forecasting by simulating the evolution of the atmospheric state. A good forecast requires that the initial state of the atmosphere is known accurately, and that the NWP model is a realistic representation of the atmosphere. Data assimilation methods are used to produce initial conditions for NWP models: the NWP model background field, typically a short-range forecast, is updated with observations in a statistically optimal way. The objective in this thesis has been to develop methods that allow data assimilation of Doppler radar radial wind observations. The work has been carried out in the High Resolution Limited Area Model (HIRLAM) 3-dimensional variational data assimilation framework. Observation modelling is a key element in exploiting indirect observations of the model variables. In radar radial wind observation modelling, the vertical model wind profile is interpolated to the observation location, and the projection of the model wind vector on the radar pulse path is calculated. The vertical broadening of the radar pulse volume and the bending of the radar pulse path due to atmospheric conditions are taken into account. Radar radial wind observations are modelled to within observation errors, which consist of instrumental, modelling, and representativeness errors. Systematic and random modelling errors can be minimized by accurate observation modelling, while the impact of the random part of the instrumental and representativeness errors can be decreased by calculating spatial averages from the raw observations. Model experiments indicate that spatial averaging clearly improves the fit of the radial wind observations to the model in terms of observation minus model background (OmB) standard deviation. Monitoring the quality of the observations is important, especially when a new observation type is introduced into a data assimilation system. Calculating the bias for radial wind observations in the conventional way can yield zero even when there are systematic differences in wind speed and/or direction; a bias estimation method designed for this observation type is therefore introduced in the thesis. Doppler radar radial wind observation modelling, together with the bias estimation method, also enables the exploitation of radial wind observations for NWP model validation. One-month model experiments performed with HIRLAM model versions differing only in a detail of the surface stress parameterization indicate that the use of radar wind observations in NWP model validation is very beneficial.
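The projection step described in this abstract lends itself to a compact illustration. Below is a minimal Python sketch of a radial wind observation operator, assuming a standard 4/3 effective-Earth-radius model for the beam bending, azimuth measured clockwise from north, and neglecting vertical air motion and the broadening of the pulse volume; the function and variable names are illustrative, not taken from the HIRLAM code.

```python
import numpy as np

def beam_height(range_m, elev_deg, radar_alt_m=0.0):
    """Height of the radar beam above ground, using the standard
    4/3 effective-Earth-radius model for beam bending."""
    Re = 4.0 / 3.0 * 6.371e6  # effective Earth radius (m)
    el = np.deg2rad(elev_deg)
    return (np.sqrt(range_m**2 + Re**2 + 2.0 * range_m * Re * np.sin(el))
            - Re + radar_alt_m)

def model_radial_wind(z_levels, u_prof, v_prof, range_m, azim_deg, elev_deg):
    """Observation operator: interpolate the model wind profile to the
    beam height and project the wind vector onto the radar pulse path."""
    z = beam_height(range_m, elev_deg)
    u = np.interp(z, z_levels, u_prof)  # model wind at beam height
    v = np.interp(z, z_levels, v_prof)
    az, el = np.deg2rad(azim_deg), np.deg2rad(elev_deg)
    # radial velocity: projection of the horizontal wind on the beam
    return (u * np.sin(az) + v * np.cos(az)) * np.cos(el)
```

OmB statistics of the kind reported in the abstract would then follow by differencing such modelled values from the measured radial winds.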
Abstract:
This work focuses on the role of macroseismology in the assessment of seismicity and probabilistic seismic hazard in Northern Europe. The main type of data under consideration is the set of macroseismic observations available for a given earthquake. The macroseismic questionnaires used to collect earthquake observations from local residents since the late 1800s constitute a special part of the seismological heritage of the region. Information on the earthquakes felt on the coasts of the Gulf of Bothnia between 31 March and 2 April 1883 and on 28 July 1888 was retrieved from contemporary Finnish and Swedish newspapers, while the earthquake of 4 November 1898 GMT is an example of an early systematic macroseismic survey in the region. A data set of more than 1200 macroseismic questionnaires is available for the earthquake in Central Finland on 16 November 1931. Basic macroseismic investigations, including the preparation of new intensity data point (IDP) maps, were conducted for these earthquakes, and previously disregarded usable observations were found in the press. The improved collection of IDPs for the 1888 earthquake shows that this event was a rare occurrence in the area: in contrast to earlier notions, it was felt on both sides of the Gulf of Bothnia. The data on the earthquake of 4 November 1898 GMT were augmented with historical background information discovered in various archives and libraries. This earthquake was of some concern to the authorities, because extra fire inspections were conducted in at least three towns, namely Tornio, Haparanda and Piteå, located in the centre of the area of perceptibility. The event thus posed an indirect hazard of fire, although its magnitude, around 4.6, was minor on a global scale. The distribution of slightly damaging intensities was larger than previously outlined, which may have resulted from the amplification of ground shaking in the soft soils of the coast and river valleys where most of the population lived. The large data set of the 1931 earthquake provided an opportunity to apply statistical methods and to assess methodologies for dealing with macroseismic intensity; the data set was evaluated using correspondence analysis. Different approaches, such as gridding, were tested to estimate the macroseismic field from intensity values distributed irregularly in space. In general, the characteristics of intensity warrant careful consideration, and a more pervasive perception of intensity as an ordinal quantity affected by uncertainties is advocated. A parametric earthquake catalogue comprising entries from both the macroseismic and instrumental eras was used for probabilistic seismic hazard assessment. The parametric-historic methodology was applied to estimate seismic hazard at a given site in Finland and to prepare a seismic hazard map for Northern Europe. The interpretation of these results is an important issue, because the recurrence times of damaging earthquakes may well exceed thousands of years in an intraplate setting such as Northern Europe. This application may therefore be seen as an example of short-term hazard assessment.
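As a rough illustration of the gridding mentioned above, the sketch below aggregates irregularly spaced IDPs onto a regular grid. Treating intensity as an ordinal quantity suggests aggregating each cell with a median rather than a mean; the cell size and all names are illustrative assumptions, not the procedure used in the thesis.

```python
import numpy as np

def grid_intensity(lon, lat, intensity, cell_deg=0.5):
    """Aggregate irregularly spaced intensity data points (IDPs) onto a
    regular grid. The per-cell median respects the ordinal nature of
    macroseismic intensity. Inputs are equal-length numpy arrays."""
    ix = np.floor(lon / cell_deg).astype(int)
    iy = np.floor(lat / cell_deg).astype(int)
    cells = {}
    for cx, cy, val in zip(ix, iy, intensity):
        cells.setdefault((cx, cy), []).append(val)
    return {cell: float(np.median(vals)) for cell, vals in cells.items()}
```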
Abstract:
Atmospheric aerosol particles have a strong impact on the global climate. A deep understanding of the physical and chemical processes affecting the atmospheric aerosol-climate system is crucial for describing those processes properly in global climate models. Besides their climatic effects, aerosol particles can impair, for example, visibility and human health. Nucleation is a fundamental step in atmospheric new particle formation; however, details of the atmospheric nucleation mechanisms have remained unresolved. The main reason has been the lack of instruments capable of measuring neutral newly formed particles in the size range below 3 nm in diameter. This thesis aims to extend the detectable particle size range towards the close-to-molecular sizes (~1 nm) of freshly nucleated clusters, and to obtain, by direct measurement, the concentrations of sub-3 nm particles in atmospheric environments and in well-defined laboratory conditions. In the work presented in this thesis, new methods and instruments for sub-3 nm particle detection were developed and tested. The selected approach comprises four different condensation-based techniques and one electrical detection scheme. All of them are capable of detecting particles with diameters well below 3 nm, some even down to ~1 nm. The developed techniques and instruments were deployed in field measurements as well as in laboratory nucleation experiments. Ambient air studies showed that in a boreal forest environment a persistent population of 1-2 nm particles or clusters exists. The observation was made using four different instruments, demonstrating a consistent capability for the direct measurement of atmospheric nucleation. The results from the laboratory experiments showed that sulphuric acid is a key species in atmospheric nucleation. The mismatch between the earlier laboratory data and ambient observations on the dependency of the nucleation rate on sulphuric acid concentration was explained: the reason was shown to be the inefficient growth of the nucleated clusters and the insufficient detection efficiency of the particle counters used in the previous experiments. Even though the exact molecular steps of nucleation remain an open question, the instrumental techniques developed in this work, as well as their application in laboratory and ambient studies, opened a new view into atmospheric nucleation and prepared the way for investigating nucleation processes with more suitable tools.
Abstract:
Accelerator mass spectrometry (AMS) is an ultrasensitive technique for measuring the concentration of a single isotope. The electric and magnetic fields of an electrostatic accelerator system are used to filter out other isotopes from the ion beam, and the high ion velocity means that molecules can be destroyed and removed from the measurement background. As a result, concentrations down to one atom in 10^16 atoms are measurable. This thesis describes the construction of the new AMS system in the Accelerator Laboratory of the University of Helsinki. The system is described in detail along with the relevant ion optics, and its performance and some of the 14C measurements made with it are reported. In the second part of the thesis, a novel statistical model for the analysis of AMS data is presented. Bayesian methods are used in order to make the best use of the available information. In the new model, instrumental drift is modelled with a continuous first-order autoregressive process, which enables rigorous normalization to standards measured at different times. The Poisson statistical nature of a 14C measurement is also taken into account properly, so that uncertainty estimates are much more stable. It is shown that, overall, the new model improves both the accuracy and the precision of AMS measurements; in particular, the results are improved for samples with very low 14C concentrations or samples measured only a few times.
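The two statistical ingredients named here, first-order autoregressive instrumental drift and Poisson counting statistics, can be illustrated with a toy generative model. The sketch below is an assumption-laden caricature rather than the thesis's actual model; all parameter names and values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ams_counts(n_runs, true_ratio, phi=0.9, sigma=0.05,
                        live_time=100.0):
    """Toy generative model: machine efficiency drifts as an AR(1)
    process in log space, and each measurement run yields
    Poisson-distributed 14C counts."""
    log_eff = np.zeros(n_runs)
    for t in range(1, n_runs):
        log_eff[t] = phi * log_eff[t - 1] + rng.normal(0.0, sigma)
    rate = true_ratio * np.exp(log_eff) * live_time  # expected counts
    return rng.poisson(rate)

counts = simulate_ams_counts(20, true_ratio=50.0)
```

In a model of this kind, Bayesian inference (e.g. by MCMC) over the unknown ratio and the latent drift path is what permits normalization to standards measured at different times.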
Abstract:
This study examines Institutional Twinning in Morocco as a case of EU cooperation through the pragmatic, ethical and moral logics of reason in Jürgen Habermas's discourse ethics. A former accession tool, Twinning was introduced in 2004 for legal approximation in the context of the European Neighborhood Policy (ENP). From a legal perspective, Twinning is a unique instrument in development cooperation. With its long historical and cultural ties to Europe, Morocco presents an interesting case study of this new form of cooperation. We analyse the motives behind the Twinning projects on illegal immigration, environment legislation and customs reform. As Twinning is a new policy instrument within the ENP context, there is relatively little preceding research, which in itself constitutes a reason to inquire into the subject. While introducing useful categories, the approaches discussing “normative power Europe” do not offer methodological tools precise enough to analyse the motives of Twinning cooperation from a broad ethical standpoint. Helene Sjursen as well as Esther Barbé and Elisabeth Johansson-Nogués have elaborated on Jürgen Habermas's discourse ethics in determining the extent of altruism in the ENP in general. Situating the analysis in the process-oriented framework of Critical Theory, discourse ethics provides the methodological framework for our research. The case studies reveal that the context in which the actors operate affects their pragmatic, ethical and moral aspirations. The utilitarian notion of profit maximization is quite pronounced, both in the number of Twinning projects in the economic sphere and in the pragmatic logics of reason instrumental to security and trade-related issues. The historical background as well as internal processes, however, contribute to defining areas of mutual interest to the actors, as well as motives that Morocco and the EU sometimes describe as the external projection of internal values. Through its different aspects, Twinning cooperation portrays the functioning of the pragmatic, ethical and moral logics of reason in international relations.
Abstract:
According to certain arguments, computation is observer-relative either in the sense that many physical systems implement many computations (Hilary Putnam), or in the sense that almost all physical systems implement all computations (John Searle). If sound, these arguments have a potentially devastating consequence for the computational theory of mind: if arbitrary physical systems can be seen to implement arbitrary computations, the notion of computation seems to lose all explanatory power as far as brains and minds are concerned. David Chalmers and B. Jack Copeland have attempted to counter these relativist arguments by placing certain constraints on the definition of implementation. In this thesis, I examine their proposals and find both wanting in some respects. During the course of this examination, I give a formal definition of the class of combinatorial-state automata, upon which Chalmers's account of implementation is based. I show that this definition implies two theorems (one an observation due to Curtis Brown) concerning the computational power of combinatorial-state automata, theorems which speak against founding the theory of implementation upon this formalism. Toward the end of the thesis, I sketch a definition of the implementation of Turing machines in dynamical systems, and offer this as an alternative to Chalmers's and Copeland's accounts of implementation. I demonstrate that the definition does not imply Searle's claim of the universal implementation of computations. However, the definition may support claims that are weaker than Searle's, yet still troubling to the computationalist. There remains a kernel of relativity in implementation at any rate, since the interpretation of physical systems seems itself to be an observer-relative matter, to some degree at least. This observation helps clarify the role the notion of computation can play in cognitive science. Specifically, I argue that the notion should be conceived as an instrumental rather than a fundamental or foundational one.
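For readers unfamiliar with the formalism, the following Python sketch shows the general shape of a combinatorial-state automaton: the global state is a vector of substates, updated jointly by a transition rule over the whole state vector and the input. This is only an illustrative encoding, not the formal definition given in the thesis.

```python
from typing import Callable, Sequence, Tuple

State = Tuple[int, ...]   # global state: a vector of substates
Input = Tuple[int, ...]

class CSA:
    """Combinatorial-state automaton: unlike a finite-state automaton,
    whose state is monadic, a CSA's state has combinatorial structure."""
    def __init__(self, step: Callable[[State, Input], State]):
        self.step = step

    def run(self, state: State, inputs: Sequence[Input]) -> State:
        for x in inputs:
            state = self.step(state, x)
        return state

# Example rule: each substate copies its left neighbour; input feeds cell 0.
shift = CSA(lambda s, x: (x[0],) + s[:-1])
print(shift.run((0, 0, 0), [(1,), (2,), (3,)]))  # prints (3, 2, 1)
```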
Abstract:
The object of study in this thesis is Finnish skiing culture, and Alpine skiing in particular, from the point of view of ethnology. The objective is to clarify how, when, why and by what routes Alpine skiing found its way to Finland, and what other phenomena it brought forth. The objective is essentially linked to the diffusion of modern sports culture to Finland. The introduction of Alpine skiing to Finland took place at a time when skiing culture was changing: flat-terrain skiing was abandoned in favour of cross-country skiing in the early decades of the 20th century, and new techniques and equipment made skiing a much more versatile sport. The time span of the study starts from the late 19th century and ends in the mid-20th century; the spatial focus is Finland. People, and the communities formed through their actions, are core elements in the study of sports and physical activity. Organizations tend to raise themselves into influential actors in the field of physical culture, even if active individuals work in their background. Original archive documents and publications of sports organizations are the central source material for this thesis, complemented by newspapers and sports magazines as well as photographs and films on early Alpine skiing in Finland. Ever since their beginning in the late 19th century, skiing races in Finland had mostly taken place on flat terrain or sea ice. Skiing in broken cross-country terrain made its breakthrough in the 1920s, at a time when modern skiing techniques were introduced in instruction manuals. In the late 1920s the Finnish Women's Physical Education Association (SNLL) developed unconventional forms of pedagogical skiing instruction. They abandoned traditional Finnish flat-terrain skiing and boldly looked for influences abroad, which caused friction between the leaders of the women's sports movement and the (male) leaders of the central skiing organization. SNLL was instrumental in launching winter tourism in Finnish Lapland in 1933. The Finnish Tourism Society, the State Railways and sports organizations worked in close co-operation to instigate a boom in tourism, which culminated in the inauguration of a tourist hotel at Pallastunturi hill in the winter of 1938. Following a Swedish model, fell-skiing was developed as a domestic counterpart to Alpine skiing as practiced in Central Europe. The first Finnish skiing resorts were built at the sites of major cross-country skiing races. Inspired by the slope at the Bad Grankulla health spa, the first slalom skiing races and fell-skiing, slalom enthusiasts began to look for purpose-built sites at which to practice turning technique. At first they would train on natural slopes, but in the late 1930s new slopes were cleared for slalom races and recreational skiing. The building of slopes and ski lifts and the emergence of organized slalom racing competitions gradually separated Alpine skiing from the old fell-skiing. After the Second World War, fell-skiing was transformed into ski trekking on marked courses. At the same time Alpine skiing also parted ways with cross-country skiing to become a sport of its own. In the 1940s and 1950s Finnish Alpine skiing was almost exclusively a competitive sport. The specificity of Alpine skiing was enhanced by the rapid development of equipment: the new skis, bindings and boots could only be used going downhill.
Abstract:
We present results of a search for anomalous production of two photons together with an electron, muon, $\tau$ lepton, missing transverse energy, or jets using $p\bar{p}$ collision data from 1.1-2.0 fb$^{-1}$ of integrated luminosity collected by the Collider Detector at Fermilab (CDF). The event yields and kinematic distributions are examined for signs of new physics without favoring a specific model of new physics. The results are consistent with the standard model expectations. The search employs several new analysis techniques that significantly reduce instrumental backgrounds in channels with an electron and missing transverse energy.
Abstract:
Scholarly research has produced conceptual knowledge that is based on real-life marketing phenomena. An initial aim of past research has been to produce marketing knowledge as a base for efficient business operation and for the improvement of productivity; the assumption has thus been that the knowledge would be applied by organisations. This study focuses on understanding the use of marketing knowledge within the field of service marketing: even if marketing knowledge about service-oriented principles and the marketing of services is based on empirical research, there is a lack of knowledge on how this marketing knowledge is in fact applied by businesses. The study focuses on four essential concepts of services marketing knowledge, namely service quality, servicescape, internal marketing, and the augmented service offering. The research involves four case companies. Data are based on in-depth interviews and questionnaire-based surveys conducted with managers, employees, and customers of these companies. All the organisations were developing in a service-oriented and customer-oriented direction; however, we found limitations, gaps, and barriers to the implementation of service-oriented and customer-oriented principles. Hence, we argue that the organisations involved in the study exploited conceptual knowledge symbolically and conceptually, but that the instrumental use of knowledge was limited. Given the shortcomings found, we also argue that the implementation of the various practices and processes related to becoming service-oriented and customer-oriented has not been fully successful. Further, we conclude that the shortcomings detected were at least in some respects related to the fact that the understanding and utilisation of conceptual knowledge of service-oriented principles and the marketing of services were somewhat limited.
Abstract:
Many Finnish IT companies have gone through numerous organizational changes over the past decades. This book draws attention to how stability may be central to software product development experts, and IT workers more generally, who continuously have to cope with such change in their workplaces. It does so by analyzing and theorizing change and stability as intertwined and co-existent, thus throwing light on how it is possible that, for example, even if ‘the walls fall down, the blokes just code’ and maintain a sense of stability in their daily work. Rather than reproducing the picture of software product development as exciting cutting-edge activity and organizational change as dramatic episodes, the study takes the reader beyond the myths surrounding these phenomena to the mundane practices, routines and organizings in product development during organizational change. An analysis of these ordinary practices offers insights into how software product development experts actively engage in constructing stability during organizational change through a variety of practices, including solidarity, homosociality, close relations to products, instrumental or functional views on products, preoccupations with certain tasks, and humble obedience. Consequently, the study shows that it may be more appropriate to talk about varieties of stability, characterized by a multitude of practices of stabilizing, rather than states of stagnation. Looking at different practices of stability in depth shows the creation of software as an arena for micro-politics, power relations and increasing pressures for order and formalization. The thesis gives particular attention to power relations and processes of positioning following organizational change: how social actors come to understand themselves in the context of ongoing organizational change, how they comply with and/or contest dominant meanings, how they identify and dis-identify with formalization, and how power relations are often reproduced despite dis-identification. Related to processes of positioning, the reader is also given a glimpse into what being at work in a male-dominated and relatively homogeneous work environment looks like. It shows how the strong presence of men or “blokes” of a particular age and education seems to become invisible in workplace talk that appears ‘non-conscious’ of gender.
Abstract:
This doctoral thesis analyses the concepts of good governance and good administration. The hypothesis is that the concepts are radically indeterminate and over-inclusive. The study examines the mechanisms of this indeterminacy: why are the concepts indeterminate, how does the indeterminacy work and, indeed, is it by any means plausible to try to define the concepts in a closed way? The study therefore focuses on the various current perspectives from which the concepts of good governance and good administration are relevant, and on the kinds of discursive content they may include. The approach is both legal (a right to good administration) and one of moral philosophy and discourse analysis. It appears that under the meta-discourse of good governance and good administration there are different sub-discourses: at least a legal sub-discourse, a moral/ethical sub-discourse, and sub-discourses concerning economic effectiveness and the promotion of societal and economic development. The main claim is that the various sub-discourses do not necessarily identify each other's value premises and conceptual underpinnings: for which value could the attribute 'good' be substituted in the different discourses (for example, good as legal, good as ethical, and so on)? The underlying presumption is, of course, that values are ultimately subjective and incommensurable. One possible way of trying to resolve the dynamics of possible discourse collisions is to employ a systems theory approach. Can the different discourses be interpreted as autopoietic systems, which create and change themselves according to their own criteria and are formed around a binary code? Can the different discourses be reconciled, or are they indifferent or hostile towards each other? Is there a hegemonic super-discourse, or is the construction of a correct meaning purely contextual? The questions come back to the notions of administration and governance themselves, the very terms that 'the good', in its polymorphic ways, is attempting to define. Do they engage different political rationalities? It can be suggested that administration is labelled by instrumental reason, governance by teleological reason. In the final analysis, the most crucial factor is that of power: it is about a Schmittian battle of concepts, about how meanings are constructed in the interplay between conceptual ambiguity and social power. Thus, the study deals with administrative law, legal theory and the limits of law from the perspective of revealing critique.
Abstract:
The Earth's climate is a highly dynamic and complex system in which atmospheric aerosols are increasingly recognized to play a key role. Aerosol particles affect the climate through a multitude of processes, directly by absorbing and reflecting radiation and indirectly by changing the properties of clouds. Because of this complexity, quantification of the effects of aerosols continues to be a highly uncertain science, and better understanding of those effects requires more information on aerosol chemistry. Before the determination of aerosol chemical composition by the various available analytical techniques, aerosol particles must be reliably sampled and prepared. Indeed, sampling is one of the most challenging steps in aerosol studies, since all available sampling techniques harbor drawbacks. In this study, novel methodologies were developed for sampling and determining the chemical composition of atmospheric aerosols. In the particle-into-liquid sampler (PILS), aerosol particles grow in saturated water vapor and are then impacted into and dissolved in liquid water. Once in water, the aerosol sample can be transported and analyzed by various off-line or on-line techniques. In this study, the PILS was modified and the sampling procedure optimized to obtain less altered aerosol samples with good time resolution. A combination of denuders with different coatings was tested to adsorb gas-phase compounds before the PILS, and mixtures of water with alcohols were introduced to increase the solubility of aerosols. The minimum sampling time required was determined by collecting samples off-line every hour and proceeding with liquid-liquid extraction (LLE) and analysis by gas chromatography-mass spectrometry (GC-MS). The laboriousness of LLE followed by GC-MS analysis prompted an evaluation of solid-phase extraction (SPE) for the extraction of aldehydes and acids in aerosol samples, two compound groups thought to be key for aerosol growth. Octadecylsilica, hydrophilic-lipophilic balance (HLB), and mixed-phase anion exchange (MAX) materials were tested for extraction. MAX proved to be efficient for acids, but no tested material offered sufficient adsorption for aldehydes. Thus, PILS samples were extracted only with MAX to guarantee good results for the organic acids determined by high-performance liquid chromatography-mass spectrometry (HPLC-MS). On-line coupling of SPE with HPLC-MS is relatively easy, and here the on-line coupling of PILS with HPLC-MS through an SPE trap produced interesting data on relevant acids in atmospheric aerosol samples. A completely different approach to aerosol sampling, namely differential mobility analyzer (DMA)-assisted filter sampling, was employed in this study to provide information about the size-dependent chemical composition of aerosols and understanding of the processes driving aerosol growth from nano-sized clusters to climatically relevant particles (>40 nm). The DMA was set to sample particles with diameters of 50, 40, and 30 nm, and the aerosols were collected on Teflon or quartz fiber filters. To clarify the gas-phase contribution, zero gas-phase samples were collected by switching off the DMA for alternating 15-minute periods. Gas-phase compounds were adsorbed equally well on both types of filter and were found to contribute significantly to the total compound mass. Gas-phase adsorption is especially significant during the collection of nanometer-size aerosols and always needs to be taken into account.
Other aims of this study were to determine the oxidation products of β-caryophyllene (the major sesquiterpene in boreal forests) in aerosol particles. Since reference compounds are needed to verify the accuracy of analytical measurements, three oxidation products of β-caryophyllene were synthesized: β-caryophyllene aldehyde, β-nocaryophyllene aldehyde, and β-caryophyllinic acid. All three were identified for the first time in ambient aerosol samples, at relatively high concentrations, and their contribution to aerosol mass (and probably growth) was concluded to be significant. The methodological and instrumental developments presented in this work enable a fuller understanding of the processes behind biogenic aerosol formation and provide new tools for more precise determination of biosphere-atmosphere interactions.
Abstract:
Fast excitatory transmission between neurons in the central nervous system is mainly mediated by L-glutamate acting on ligand-gated (ionotropic) receptors. These are further categorized according to their pharmacological properties into AMPA (2-amino-3-(5-methyl-3-oxo-1,2-oxazol-4-yl)propanoic acid), NMDA (N-methyl-D-aspartic acid) and kainate (KAR) subclasses. In the rat and mouse hippocampus, the development of glutamatergic transmission is most dynamic during the first postnatal weeks. This coincides with the declining developmental expression of GluK1 subunit-containing KARs; however, the function of KARs during early development of the brain is poorly understood. The present study reveals novel types of tonically active KARs (hereafter referred to as tKARs) which play a central role in the functional development of the hippocampal CA3-CA1 network. The study shows for the first time how concomitant pre- and postsynaptic KAR function contributes to the development of CA3-CA1 circuitry by regulating transmitter release and interneuron excitability. Moreover, the tKAR-dependent regulation of transmitter release provides a novel mechanism for silencing and unsilencing early synapses and thus for shaping early synaptic connectivity. The role of GluK1-containing KARs was first studied in area CA3 of the neonatal hippocampus. The data demonstrate that presynaptic KARs in excitatory synapses onto both pyramidal cells and interneurons are tonically activated by ambient glutamate and that they regulate glutamate release differentially, depending on the target cell type. At synapses onto pyramidal cells these tKARs inhibit glutamate release in a G-protein-dependent manner, whereas at synapses onto interneurons tKARs facilitate glutamate release. At the network level these mechanisms act together to upregulate the activity of GABAergic microcircuits and to promote endogenous hippocampal network oscillations. By virtue of this, tKARs are likely to have an instrumental role in the functional development of the hippocampal circuitry. The next step was to investigate the role of GluK1-containing receptors in the regulation of interneuron excitability. The spontaneous firing of interneurons in the CA3 stratum lucidum decreases markedly during development. The shift involves tKARs that inhibit the medium-duration afterhyperpolarization (mAHP) in these neurons during the first postnatal week. This promotes burst spiking of interneurons and thereby increases GABAergic activity in the network, synergistically with the tKAR-mediated facilitation of their excitatory drive. During development the amplitude of the evoked medium afterhyperpolarizing current (ImAHP) increases dramatically owing to the decoupling of tKAR activation from ImAHP modulation. These changes take place at the same time as the endogenous network oscillations disappear. These tKAR-driven mechanisms in the CA3 area regulate both GABAergic and glutamatergic transmission and thus gate the feedforward excitatory drive to area CA1. Here, presynaptic tKARs at synapses onto CA1 pyramidal cells suppress glutamate release and enable strong facilitation in response to high-frequency input. CA1 synapses are therefore finely tuned to high-frequency transmission, an activity pattern that is common in the neonatal CA3-CA1 circuitry both in vivo and in vitro. The tKAR-regulated release probability acts as a novel presynaptic silencing mechanism that can be unsilenced in response to Hebbian activity.
The present results shed new light on the mechanisms modulating the early network activity that paves the way for the oscillations underlying cognitive tasks such as learning and memory. Kainate receptor antagonists are already being developed for therapeutic use, for instance against pain and migraine. Because of these modulatory actions, tKARs also represent an attractive candidate for the therapeutic treatment of developmentally related complications such as learning disabilities.
Abstract:
This thesis studies the effect of income inequality on economic growth by analyzing panel data from several countries, with both short and long time dimensions. Two of the chapters study the direct effect of inequality on growth, and one chapter also looks at a possible indirect effect by assessing the effect of inequality on savings. In Chapter two, the effect of inequality on growth is studied using a panel of 70 countries and the new EHII2008 inequality measure. The chapter contributes to two problems that panel econometric studies on the economic effects of inequality have recently encountered: the comparability problem associated with the commonly used Deininger and Squire's Gini index, and the problem of estimating group-related elasticities in panel data. In this study, a simple way to 'bypass' the vagueness related to the use of parametric methods to estimate group-related parameters is presented: the idea is to estimate the group-related elasticities implicitly using a set of group-related instrumental variables. The estimation results with the new data and method indicate that the relationship between income inequality and growth is likely to be non-linear. Chapter three incorporates the EHII2.1 inequality measure and a panel of annual time series observations from 38 countries to test the existence of long-run equilibrium relation(s) between inequality and the level of GDP. Panel unit root tests indicate that both the logarithmic EHII2.1 inequality measure and the logarithmic GDP per capita series are I(1) nonstationary processes. They are also found to be cointegrated of order one, which implies that there is a long-run equilibrium relation between them. The long-run growth elasticity of inequality is found to be negative in the middle-income and rich economies, but the results for poor economies are inconclusive. In the fourth Chapter, macroeconomic data on nine developed economies spanning four decades from 1960 are used to study the effect of changes in the top income share on national and private savings. The income share of the top 1% of the population is used as a proxy for the distribution of income. The effect of inequality on private savings is found to be positive in the Nordic and Central European countries, but for the Anglo-Saxon countries the direction of the effect (positive vs. negative) remains somewhat ambiguous. Inequality is found to have an effect on national savings only in the Nordic countries, where it is positive.
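To make the instrumental-variable idea concrete, here is a minimal two-stage least squares sketch in Python with group-related instruments (group dummies interacted with a lagged regressor). The data-generating process and all variable names are illustrative assumptions, not the specification used in the thesis.

```python
import numpy as np

def tsls(y, X, Z):
    """Two-stage least squares: replace the regressors X by their
    projection on the instruments Z, then regress y on the fit."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]  # first stage
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]   # second stage

rng = np.random.default_rng(1)
n, n_groups = 300, 3
g = rng.integers(0, n_groups, size=n)
ineq_lag = rng.normal(size=n)            # exogenous instrument basis
shock = rng.normal(scale=0.3, size=n)    # source of endogeneity
ineq = ineq_lag + shock                  # endogenous regressor
growth = 1.0 - 0.5 * ineq + 0.8 * shock + rng.normal(scale=0.2, size=n)

D = np.eye(n_groups)[g]                  # group dummies
Z = np.column_stack([D, D * ineq_lag[:, None]])  # group-related IVs
X = np.column_stack([np.ones(n), ineq])
print(tsls(growth, X, Z))                # roughly [1.0, -0.5]
```

Because the instruments vary by group, the same machinery can recover group-specific elasticities implicitly, which is the spirit of the approach sketched in the abstract.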