61 results for Macroeconomic theory
in Helda - Digital Repository of the University of Helsinki
Abstract:
The dissertation consists of an introductory chapter and three essays that apply search-matching theory to study the interaction of labor market frictions, technological change and macroeconomic fluctuations. The first essay studies the impact of capital-embodied growth on equilibrium unemployment by extending a vintage capital/search model to incorporate vintage human capital. In addition to the capital obsolescence (or creative destruction) effect that tends to raise unemployment, vintage human capital introduces a skill obsolescence effect of faster growth that has the opposite sign. Faster skill obsolescence reduces the value of unemployment and hence wages, leading to more job creation and less job destruction, and unambiguously reducing unemployment. The second essay studies the effect of skill-biased technological change on skill mismatch and the allocation of workers and firms in the labor market. By allowing workers to invest in education, we extend a matching model with two-sided heterogeneity to incorporate an endogenous distribution of high- and low-skill workers. We consider various possibilities for the cost of acquiring skills and show that while unemployment increases in most scenarios, the effect on the distribution of vacancy and worker types varies with the structure of skill costs. When the model is extended to incorporate endogenous labor market participation, we show that the unemployment rate becomes less informative about the state of the labor market as the participation margin absorbs employment effects. The third essay studies the effects of labor taxes on equilibrium labor market outcomes and macroeconomic dynamics in a New Keynesian model with matching frictions. Three policy instruments are considered: a marginal tax and a tax subsidy to produce tax progression schemes, and a replacement ratio to account for variability in outside options.
In equilibrium, the marginal tax rate and replacement ratio dampen economic activity, whereas tax subsidies boost the economy. The marginal tax rate and replacement ratio amplify shock responses, whereas employment subsidies weaken them. The tax instruments affect the degree to which the wage absorbs shocks. We show that increasing tax progression when taxation is initially progressive is harmful for steady-state employment and output, and amplifies the sensitivity of macroeconomic variables to shocks. When taxation is initially proportional, increasing progression is beneficial for output and employment and dampens shock responses.
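The equilibrium-unemployment mechanics underlying all three essays can be illustrated with the textbook flow-balance condition of the matching framework (a generic sketch, not the dissertation's actual model): in steady state, the flow into unemployment, s(1 - u), equals the flow out, f·u.

```python
def steady_state_unemployment(separation_rate: float, job_finding_rate: float) -> float:
    """Steady-state unemployment rate u solving s * (1 - u) = f * u."""
    s, f = separation_rate, job_finding_rate
    return s / (s + f)

# Illustrative numbers only: a policy that lowers the job-finding rate
# (e.g. a higher replacement ratio dampening job creation) raises u.
u_base = steady_state_unemployment(0.02, 0.30)
u_dampened = steady_state_unemployment(0.02, 0.25)
```

In this stylized accounting, any instrument that shifts job creation or job destruction shows up as a change in f or s, which is why the essays can trace tax progression and replacement ratios all the way to the unemployment rate.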
Abstract:
Perhaps the most fundamental prediction of financial theory is that the expected returns on financial assets are determined by the amount of risk contained in their payoffs. Assets with a riskier payoff pattern should provide higher expected returns than assets that are otherwise similar but whose payoffs contain less risk. Financial theory also predicts that not all types of risk should be compensated with higher expected returns. It is well known that asset-specific risk can be diversified away, whereas the systematic component of risk that affects all assets remains even in large portfolios. Thus, the asset-specific risk that the investor can easily eliminate by diversification should not lead to higher expected returns, and only the shared movement of individual asset returns – the sensitivity of these assets to a set of systematic risk factors – should matter for asset pricing. It is within this framework that this thesis is situated. The first essay proposes a new systematic risk factor, hypothesized to be correlated with changes in investor risk aversion, which explains a large fraction of the return variation in the cross-section of stock returns. The second and third essays investigate the pricing of asset-specific risk, uncorrelated with commonly used risk factors, in the cross-section of stock returns. The three essays mentioned above use stock market data from the U.S. The fourth essay presents a new total return stock market index for the Finnish stock market, beginning with the opening of the Helsinki Stock Exchange in 1912 and ending in 1969, when other total return indices become available. Because no total return stock market index for the period prior to 1970 has been available before, academics and stock market participants have not known the historical return that stock market investors in Finland could have achieved on their investments.
The new stock market index presented in essay 4 makes it possible, for the first time, to calculate the historical average return on the Finnish stock market and to conduct further studies that require long time-series of data.
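The diversification argument above can be made concrete with a one-factor sketch (illustrative numbers only, not results from the thesis): for N equally weighted assets that share a factor loading β, portfolio variance is β²·Var(market) + Var(idiosyncratic)/N, so only the systematic term survives as N grows.

```python
def portfolio_variance(n_assets: int, beta: float = 1.0,
                       var_market: float = 0.04, var_idio: float = 0.09) -> float:
    """Variance of an equally weighted portfolio of n identical one-factor assets.

    The systematic term beta**2 * var_market does not diversify away;
    the idiosyncratic term var_idio / n_assets shrinks toward zero.
    """
    return beta ** 2 * var_market + var_idio / n_assets

# As n grows, the variance approaches the systematic floor of 0.04:
for n in (1, 10, 100, 10_000):
    print(n, portfolio_variance(n))
```

This is why, in the framework of the thesis, only the sensitivity to systematic risk factors should command a return premium, while the pricing of leftover asset-specific risk (essays two and three) is an empirical puzzle.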
Abstract:
In recent years, thanks to developments in information technology, large-dimensional datasets have become increasingly available. Researchers now have access to thousands of economic series, and the information contained in them can be used to create accurate forecasts and to test economic theories. To exploit this large amount of information, researchers and policymakers need an appropriate econometric model. Usual time series models, such as vector autoregressions, cannot incorporate more than a few variables. There are two ways to solve this problem: use variable selection procedures, or gather the information contained in the series into an index model. This thesis focuses on one of the most widespread index models, the dynamic factor model (the theory behind this model, based on previous literature, is the core of the first part of this study), and its use in forecasting Finnish macroeconomic indicators (the focus of the second part of the thesis). In particular, I forecast economic activity indicators (e.g. GDP) and price indicators (e.g. the consumer price index) from three large Finnish datasets. The first dataset contains a large set of aggregated series obtained from the Statistics Finland database. The second dataset is composed of economic indicators from the Bank of Finland. The last dataset is formed of disaggregated data from Statistics Finland, which I call the micro dataset. The forecasts are computed following a two-step procedure: in the first step I estimate a set of common factors from the original dataset; the second step consists of formulating forecasting equations that include the previously extracted factors. The predictions are evaluated using the relative mean squared forecast error, where the benchmark model is a univariate autoregressive model. The results are dataset-dependent.
The forecasts based on factor models are very accurate for the first dataset (the Statistics Finland one), while they are considerably worse for the Bank of Finland dataset. The forecasts derived from the micro dataset are still good, but less accurate than those obtained in the first case. This work opens multiple lines of further research. The results obtained here can be replicated for longer datasets. The non-aggregated data can be represented in an even more disaggregated form (firm level). Finally, the use of micro data, one of the major contributions of this thesis, can be useful in the imputation of missing values and the creation of flash estimates of macroeconomic indicators (nowcasting).
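The two-step procedure described above can be sketched with simulated data (a generic diffusion-index illustration; the dimensions, loadings and noise levels are invented, not the thesis's datasets or specification): principal components of the panel estimate the factors, and a regression of the target on lagged factors produces the forecast.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, r = 200, 50, 2                       # periods, series, number of factors
F = rng.standard_normal((T, r))            # latent common factors
Lam = rng.standard_normal((N, r))          # factor loadings
X = F @ Lam.T + 0.5 * rng.standard_normal((T, N))  # observed panel of predictors

# In this simulated example the target leads the factors by one period.
y_next = F[:-1] @ np.array([1.0, -0.5]) + 0.1 * rng.standard_normal(T - 1)

# Step 1: estimate the factors by principal components of the standardized panel.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, _ = np.linalg.svd(Xs, full_matrices=False)
F_hat = U[:, :r] * S[:r]

# Step 2: regress y_{t+1} on a constant and the factors at t, then forecast
# one step ahead from the final observation.
Z = np.column_stack([np.ones(T - 1), F_hat[:-1]])
beta, *_ = np.linalg.lstsq(Z, y_next, rcond=None)
forecast = np.concatenate(([1.0], F_hat[-1])) @ beta
```

A univariate autoregression fitted to the target alone would serve as the benchmark, with the two forecasts compared through the relative mean squared forecast error, as in the thesis.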
Abstract:
The study analyses the influence of chaos theory on fiction and on literary scholarship, and argues that the role of chaos theory in the literary field is best understood through the concepts it has opened up. Rather than being applied directly, chaos theory has enabled new kinds of conversations about old topics, and concepts drawn from the natural sciences have allowed previously deadlocked arguments to be reopened from a new perspective. The dissertation concentrates on three areas: theorizing the structure of the literary work, conceptualizing and describing human (especially authorial) identity, and reflecting on the relation between fiction and reality. The aim of the study is to show how these topics have been approached through chaos theory both in literary scholarship and in literary works themselves. At the centre of the dissertation are analyses of works by the novelist John Barth, the dramatist Tom Stoppard and the poet Jorie Graham. These writers draw on chaos theory for ways of conceptualizing structures that are simultaneously dynamic processes and graspable forms. Recurring literary themes include the paradoxically recognizable yet ever-changing identity of the human being, and a reality that eludes final appropriation while remaining fascinating and worth pursuing. Through the analysis of these writers' works and through theoretical discussion, the dissertation brings out a humanistic perspective on the significance of chaos theory in literature, one emphasizing coherence, intelligibility and realism, which has been overshadowed in earlier research.
Abstract:
This dissertation is a theoretical study of finite-state based grammars used in natural language processing. The study is concerned with certain varieties of finite-state intersection grammars (FSIG) whose parsers define regular relations between surface strings and annotated surface strings. The study focuses on three aspects of FSIGs. (i) Computational complexity of grammars under limiting parameters: the computational complexity of practical natural language processing is approached through performance-motivated parameters on structural complexity. Each parameter splits some grammars in the Chomsky hierarchy into an infinite set of subset approximations. When the approximations are regular, they seem to fall into the logarithmic-time hierarchy and the dot-depth hierarchy of star-free regular languages. This theoretical result is important and possibly relevant to grammar induction. (ii) Linguistically applicable structural representations: regarding linguistically applicable representations of syntactic entities, the study contains new bracketing schemes that cope with dependency links, left- and right-branching, crossing dependencies and spurious ambiguity. New grammar representations resembling the Chomsky-Schützenberger representation of context-free languages are presented, including, in particular, representations for mildly context-sensitive non-projective dependency grammars whose performance-motivated approximations are parseable in linear time. (iii) Compilation and simplification of linguistic constraints: efficient compilation methods for certain regular operations, such as generalized restriction, are presented. These include an elegant algorithm that has already been adopted in a proprietary finite-state tool. In addition to the compilation methods, an approach to on-the-fly simplification of finite-state representations of parse forests is sketched.
These findings are tightly coupled with each other under the theme of locality. I argue that the findings help us to develop better, linguistically oriented formalisms for finite-state parsing and more efficient parsers for natural language processing. Keywords: syntactic parsing, finite-state automata, dependency grammar, first-order logic, linguistic performance, star-free regular approximations, mildly context-sensitive grammars
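The core idea of a finite-state intersection grammar, namely that the parse set of a sentence is the intersection of its candidate annotations with every constraint language, can be sketched with ordinary regular expressions standing in for finite-state constraints (a toy illustration; the tags, constraints and example sentence are invented, not the dissertation's formalism):

```python
import re

# Candidate annotated surface strings for the sentence "time flies".
candidates = [
    "time/N flies/V",
    "time/V flies/N",
    "time/N flies/N",
]

# Each constraint is a regular language over annotated strings; a reading
# survives only if it lies in the intersection of all constraint languages.
constraints = [
    re.compile(r"(\S+/N )*\S+/V( \S+/N)*"),  # exactly one verb in the clause
    re.compile(r"\S+/N( .*)?"),              # the clause must not begin with a verb
]

# Intersection realized as filtering: keep readings accepted by every constraint.
parses = [c for c in candidates
          if all(p.fullmatch(c) for p in constraints)]
print(parses)
```

In a real FSIG the candidates and constraints are finite automata and the intersection is computed by automaton product rather than by enumeration, which is where the compilation and simplification results of the dissertation matter.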
Abstract:
Space in Musical Semiosis is a study of musical meaning, spatiality and composition. Earlier studies on musical composition have not adequately treated the problems of musical signification. Here, composition is considered an epitomic process of musical signification; hence the core problems of composition theory are core problems of musical semiotics. The study employs a framework of naturalist pragmatism, based on C. S. Peirce's philosophy. It operates on concepts such as subject, experience, mind and inquiry, and incorporates relevant ideas of Aristotle, Peirce and John Dewey into a synthetic view of esthetic, practic and semiotic, in order to grasp the musical signification process as a case of semiosis in general. Based on expert accounts, music is depicted as real, communicative, representational, useful, embodied and non-arbitrary. These qualities describe how music and the musical composition process are mental processes. Peirce's theories are combined with current morphological theories of cognition into a view of mind in which space is central. This requires an analysis of space and the acceptance of a relativist understanding of spatiality. This approach to signification suggests that mental processes are spatially embodied, by virtue of hard facts of the world, literal representations of objects, and primary and complex metaphors, each sharing identities of spatial structures. Consequently, music and the musical composition process are spatially embodied. Composing music appears as a process of constructing metaphors: a praxis of shaping and reshaping features of sound, representable from simple quality dimensions to complex domains. In principle, any conceptual space, metaphorical or literal, may set off and steer elaboration, depending on its practical bearings on the habits of feeling, thinking and action induced in musical communication.
In this sense, it is evident that music helps us to reorganize our habits of feeling, thinking, and action. These habits, in turn, constitute our existence. The combination of Peirce and morphological approaches to cognition serves well for understanding musical and general signification. It appears both possible and worthwhile to address a variety of issues central to musicological inquiry in the framework of naturalist pragmatism. The study may also contribute to the development of Peircean semiotics.
Abstract:
This work investigates the role of narrative literature in late-20th-century and contemporary Anglo-American moral philosophy. It aims to show the trend of reading narrative literature for purposes of moral philosophy from the 1970s and early '80s to the present day as part of a larger movement in Anglo-American moral philosophy, and to present a view of its significance for moral philosophy overall. Chapter 1 provides some preliminaries concerning the view of narrative literature on which my discussion builds. In chapter 2 I give an outline of how narrative literature is considered in contemporary Anglo-American moral philosophy, and connect this use to the broad trend of neo-Aristotelian ethics in this context. In chapter 3 I connect the use of literature to the idea of the non-generalizability of moral perception and judgment, which is central to the neo-Aristotelian trend, as well as to a range of moral particularisms and anti-theoretical positions of late-20th-century and contemporary ethics. The joint task of chapters 2 and 3 is to situate the trend of reading narrative literature for the purposes of moral philosophy in the present context of moral philosophy. In the following two chapters, 4 and 5, I move on from the particularizing power of narrative literature, which is emphasized by neo-Aristotelians and particularists alike, to a broader understanding of the intellectual potential of narrative literature. In chapter 4 I argue that narrative literature has its own forms of generalization, which enrich our understanding of the workings of ethical generalizations in philosophy. In chapter 5 I discuss Iris Murdoch's and Martha Nussbaum's respective ways of combining ethical generality and particularity in a philosophical framework where both systematic moral theory and narrative literature are taken seriously. In chapter 6 I analyse the controversy between contemporary anti-theoretical conceptions of ethics and Nussbaum's refutation of these.
I present my suggestion for how the significance of the ethics/literature discussion for moral philosophy can be understood if one wants to overcome the limitations of both Nussbaum's theory-centred, equilibrium-seeking perspective and the anti-theorists' repudiation of theory. I call my position the 'inclusive approach'.
Abstract:
This thesis explores melodic and harmonic features of heavy metal and, in doing so, explores various methods of music analysis: their applicability and limitations regarding the study of heavy metal music. The study is built on three general hypotheses, according to which 1) acoustic characteristics play a significant role in chord construction in heavy metal, 2) heavy metal has strong ties and similarities with other Western musical styles, and 3) theories and analytical methods of Western art music may be applied to heavy metal. It seems evident that in heavy metal some chord structures appear far more frequently than others. It is suggested here that the fundamental reason for this is the use of the guitar distortion effect. Subsequently, theories as to how and under what principles heavy metal is constructed need to be put under discussion; analytical models regarding the classification of consonance and dissonance and the categorization of chords are here revised to meet the common practices of this music. It is evident that heavy metal is not an isolated style of music; it is seen here as a cultural fusion of various musical styles. Moreover, it is suggested that the theoretical background to the construction of Western music and its analysis can offer invaluable insights into heavy metal. However, the analytical methods need to be reformed to some extent to meet the characteristics of the music. This reformation includes an accommodation of linear and functional theories that has been found rather rarely in music theory and musicology.
Abstract:
In the future, the number of disabled drivers requiring a special evaluation of their driving ability will increase due to the ageing population, as well as the progress of adaptive technology. This places pressure on the development of the driving evaluation system. Despite quite intensive research, there is still no consensus concerning what the factual situation in a driver evaluation is (methodology), which measures should be included in an evaluation (methods), and how an evaluation should be carried out (practice). To answer these questions we carried out empirical studies and simultaneously elaborated a conceptual model of driving and driving evaluation. The findings of the empirical studies can be condensed into the following points: 1) Driving ability as defined by the on-road driving test is associated with different laboratory measures depending on the study group. Faults in the laboratory tests predicted faults in the on-road driving test in the novice group, whereas slowness in the laboratory predicted driving faults in the group of experienced drivers. 2) The Parkinson study clearly showed that even an experienced clinician cannot reliably accomplish an evaluation of a disabled person's driving ability without collaboration with other specialists. 3) The main finding of the stroke study was that the use of a multidisciplinary team as a source of information harmonises the specialists' evaluations. 4) The patient studies demonstrated that disabled persons themselves, as well as their spouses, are as a rule not reliable evaluators. 5) From the safety point of view, perceptible operations with the control devices are not crucial; the correct mental actions which the driver carries out with the help of the control devices are of greatest importance.
6) Personality factors, including higher-order needs and motives, attitudes and a degree of self-awareness, particularly a sense of illness, are decisive when evaluating a disabled person's driving ability. Personality is also the main source of resources for compensating for lower-order physical deficiencies and restrictions. From work with the conceptual model we drew the following methodological conclusions. First, the driver has to be considered as a holistic subject of the activity: a multilevel, hierarchically organised system of an organism, a temperament, an individuality and a personality, where the personality is the leading subsystem from the standpoint of safety. Second, driving, as a human form of sociopractical activity, is also a hierarchically organised dynamic system. Third, an evaluation of driving ability is a question of matching these two hierarchically organised structures: a subject of an activity and the activity proper. Fourth, an evaluation has to be person-centred, not disease-, function- or method-centred. On the basis of our study, a multidisciplinary team (practitioner, driving school teacher, psychologist, occupational therapist) is recommended for demanding driver evaluations. What is primary in driver evaluations is a coherent conceptual model, while the concrete methods of evaluation may vary. However, the on-road test must always be performed if possible.
Abstract:
This dissertation examined the research-based teacher education at the University of Helsinki from different theoretical and practical perspectives. Five studies focused on these perspectives both separately and overlappingly. Study I focused on the reflection process of graduating teacher students. The data consisted of essays the students wrote as their last assignment before graduating, in which they examined their development as researchers during the MA thesis research process. The results indicated that the teacher students had analysed their own development thoroughly during the process and had reflected on theoretical as well as practical educational matters. The results also pointed out that, in the students' opinion, personally conducted research is a significant learning process. -- Study II investigated teacher students' workplace learning and the integration of theory and practice in teacher education. The student interviews focused on their learning of a teacher's work prior to education. The interviewees' responses concerning their 'surviving' in teaching prior to teacher education were categorized into three categories: learning through experiences, school as a teacher learning environment, and case-specific learning. The survey part of the study focused on the integration of theory and practice within the education process. The results showed that the students who worked while they studied took advantage of their studies and applied them to work. They set more demanding teaching goals and reflected on their work more theoretically. -- Study III examined practical aspects of the teacher students' MA thesis research as well as the integration of theory and practice in teacher education. The participants were surveyed using a web-based survey dealing with their teacher education experiences.
According to the results, most of the students had chosen a practical topic for their MA thesis, one arising from their work environment, and most had chosen a research topic that would develop their own teaching. The results showed that the integration of theory and practice had taken place in much of the course work, but most obviously in the practicum periods, and also in the courses concerning the school subjects. The majority felt that the education had in some way been successful with regard to integration. -- Study IV explored the idea of considering teacher students' MA thesis research as professional development. Twenty-three teachers were interviewed about their experiences of conducting research on their own work as teachers. The results of the interviews showed that the reasons for choosing the MA thesis research topic were multiple: practical, theoretical, personal and professional reasons, as well as outside influences. The objectives of the MA thesis research, besides graduating, were actual projects, developing the ability to work as teachers, conducting significant research, and sharing knowledge of the topic. The results indicated that an MA thesis can function as a tool for professional development, for example in finding ways to adjust teaching, increasing interaction skills, gaining knowledge or improving reflection on theory and/or practice, strengthening self-confidence as a teacher, increasing research skills or academic writing skills, and becoming critical and able to read scientific and academic literature. -- Study V analysed teachers' views of the impact of practitioner research. According to the results, the interviewees considered the benefits of practitioner research to be many, affecting teachers, pupils, parents, the working community, and the wider society. Most of the teachers indicated that they intended to continue conducting research in the future.
The results also showed that teachers often reflected personally and collectively, and viewed this as important. -- These five studies point out that MA thesis research is, and can be, a useful tool for increasing reflection connected with personal and professional development, as well as for integrating theory and practice. The studies suggest that more advantage could be taken of the MA thesis research project. More integration of working and studying could, and should, be made possible for teacher students. This could be done in various ways within teacher education, but the MA thesis should be seen as a pedagogical possibility.
Abstract:
The aim of this thesis was to develop measurement techniques and systems for measuring air quality and to provide information about air quality conditions and the amount of gaseous emissions from semi-insulated and uninsulated dairy buildings in Finland and Estonia. Specialization and intensification in livestock farming, such as dairy production, is usually accompanied by an increase in concentrated environmental emissions. In addition to high moisture, the presence of dust and corrosive gases, and widely varying gas concentrations in dairy buildings, Finland and Estonia experience winter temperatures below -40 ºC and summer temperatures above +30 ºC. The adoption of new technologies for long-term air quality monitoring and measurement remains relatively uncommon in dairy buildings because the construction and maintenance of accurate monitoring systems for long-term use are too expensive for the average dairy farmer to afford. Though accurate air quality measurement systems intended mainly for research purposes have been documented in the past, standardised methods and documentation of affordable systems and simple methods for performing air quality and emission measurements in dairy buildings are unavailable. In this study, we built three measurement systems: 1) a Stationary system with integrated affordable sensors for on-site measurements, 2) a Wireless system with affordable sensors for off-site measurements, and 3) a Mobile system consisting of expensive and accurate sensors for measuring air quality. In addition to assessing existing methods, we developed simplified methods for measuring ventilation and emission rates in dairy buildings. The three measurement systems were successfully used to measure air quality in uninsulated, semi-insulated, and fully insulated dairy buildings between 2005 and 2007. When carefully calibrated, the affordable sensors in the systems gave reasonably accurate readings.
The spatial air quality survey showed high variation in microclimate conditions in the dairy buildings measured. The average indoor air concentration was 950 ppm for carbon dioxide, 5 ppm for ammonia and 48 ppm for methane; the average relative humidity was 70% and the average inside air velocity 0.2 m/s. The average winter and summer indoor temperatures during the measurement period were -7 ºC and +24 ºC for the uninsulated, +3 ºC and +20 ºC for the semi-insulated, and +10 ºC and +25 ºC for the fully-insulated dairy buildings. The measurement results showed that the uninsulated dairy buildings had lower indoor gas concentrations and emissions than fully insulated buildings. Although occasionally exceeded, the ventilation rates and average indoor air quality in the dairy buildings were largely within recommended limits. We assessed the traditional heat balance, moisture balance, carbon dioxide balance and direct airflow methods for estimating ventilation rates. Direct velocity measurement for the estimation of the ventilation rate proved impractical for naturally ventilated buildings. Two methods were developed for estimating ventilation rates. The first is applicable in buildings in which the ventilation can be stopped or completely closed. The second is useful in naturally ventilated buildings with large openings and high ventilation rates, where spatial gas concentrations are heterogeneously distributed. Two traditional methods (the carbon dioxide and methane balances) and two newly developed methods (theoretical modelling using Fick's law and boundary layer theory, and the recirculation flux-chamber technique) were used to estimate ammonia emissions from the dairy buildings. Using the traditional carbon dioxide balance method, ammonia emissions per cow ranged from 7 g day⁻¹ to 35 g day⁻¹, and methane emissions per cow from 96 g day⁻¹ to 348 g day⁻¹.
The developed methods proved to be as accurate as the traditional methods. The variation between the mean emissions estimated with the traditional and the developed methods was less than 20%. The developed modelling procedure provided a sound framework for examining the impact of production systems on ammonia emissions in dairy buildings.
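The carbon dioxide balance method mentioned above rests on a simple steady-state mass balance: the animals' CO2 production must leave the building with the ventilation air, so the ventilation rate follows from the indoor-outdoor concentration difference. A minimal sketch with illustrative numbers (the per-cow CO2 production and the outdoor concentration are assumptions for the example, not values reported in the thesis):

```python
def ventilation_rate_co2(co2_production_m3_per_h: float,
                         c_inside_ppm: float,
                         c_outside_ppm: float) -> float:
    """Ventilation rate Q [m^3/h] from the steady-state CO2 balance
    production = Q * (c_in - c_out), with concentrations given in ppm."""
    return co2_production_m3_per_h / ((c_inside_ppm - c_outside_ppm) * 1e-6)

# Assumed per-cow CO2 production of 0.2 m^3/h and 400 ppm outdoors,
# combined with the 950 ppm average indoor concentration reported above:
q_per_cow = ventilation_rate_co2(0.2, 950, 400)  # m^3/h of fresh air per cow
```

The same balance, applied with a measured ammonia concentration difference instead, converts a ventilation-rate estimate into an emission-rate estimate, which is the logic behind the per-cow emission figures quoted in the abstract.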
Abstract:
Formulating a quantum theory of gravitation has been a goal of theoretical physicists since the birth of quantum mechanics. Applying quantum mechanics to high-energy phenomena within the framework of general relativity leads to an operational noncommutativity of the spacetime coordinates. Noncommutative spacetime geometries are also encountered in certain low-energy limits of open string theories. A theory of gravitation on noncommutative spacetime could be compatible with quantum mechanics; it could allow a description of the physics of very short distances and high-energy processes, which is believed to be nonlocal; and it could reproduce a theory consistent with general relativity at long distances. In this work I consider gravitation as a gauge theory of Poincaré symmetry and attempt to generalize this view to noncommutative spacetimes. First I present the central role of Poincaré symmetry in relativistic physics and show how the classical theory of gravitation is derived as a gauge theory of Poincaré symmetry on commutative spacetime. I then introduce noncommutative spacetime and the formulation of quantum field theory on it. Because of the local nature of gauge symmetries, I examine carefully the formulation of gauge field theories on noncommutative spacetime. Particular attention is paid to the twisted Poincaré symmetry of these theories, a new type of quantum symmetry possessed by noncommutative spacetime. Next I consider the problems in formulating a noncommutative theory of gravitation and the solutions proposed in the literature. I explain how all approaches to date fail to formulate covariance under general coordinate transformations, the cornerstone of general relativity.
Finally, I study the possibility of generalizing the twisted Poincaré symmetry into a local gauge symmetry, in the hope of obtaining a noncommutative gauge theory of gravitation. I show that such a generalization cannot be achieved by twisting the Poincaré symmetry with a covariant twist element. Consequently, future research on noncommutative gravitation and twisted Poincaré symmetry should concentrate on other approaches.
Abstract:
The efforts to combine quantum theory with general relativity have been great and marked by several successes. One field where progress has lately been made is the study of noncommutative quantum field theories, which arise as a low-energy limit in certain string theories. The idea of noncommutativity comes naturally when combining these two extremes, and it has profound implications for results widely accepted in traditional, commutative theories. In this work I review the status of one of the most important connections in physics, the spin-statistics relation. The relation is deeply ingrained in our reality: it gives us the structure of the periodic table and is of crucial importance for the stability of all matter. The dramatic effects of the noncommutativity of space-time coordinates, mainly the loss of Lorentz invariance, call the spin-statistics relation into question. The spin-statistics theorem is first presented in its traditional setting, with a clarifying proof starting from minimal requirements. Next the notion of noncommutativity is introduced and its implications studied. The discussion is essentially based on twisted Poincaré symmetry, the space-time symmetry of noncommutative quantum field theory. The controversial issue of microcausality in noncommutative quantum field theory is settled by showing for the first time that the light-wedge microcausality condition is compatible with twisted Poincaré symmetry. The spin-statistics relation is considered both from the point of view of braided statistics and in the traditional Lagrangian formulation of Pauli, with the conclusion that Pauli's age-old theorem withstands even this test, dramatic as it is for the whole structure of space-time.