44 results for Cosmological fluctuations

in Helda - Digital Repository of University of Helsinki


Relevance: 30.00%

Abstract:

This thesis consists of four research papers and an introduction providing some background. The structure in the universe is generally considered to originate from quantum fluctuations in the very early universe. The standard lore of cosmology states that the primordial perturbations are almost scale-invariant, adiabatic, and Gaussian. A snapshot of the structure from the time when the universe became transparent can be seen in the cosmic microwave background (CMB). For a long time, mainly the power spectrum of the CMB temperature fluctuations has been used to obtain observational constraints, especially on deviations from scale invariance and pure adiabaticity. Non-Gaussian perturbations provide a novel and very promising way to test theoretical predictions: they probe beyond the power spectrum, or two-point correlator, since non-Gaussianity involves higher-order statistics. The thesis concentrates on the non-Gaussian perturbations arising in several situations involving two scalar fields, namely hybrid inflation and various forms of preheating. First we go through some basic concepts -- such as cosmological inflation, reheating and preheating, and the role of scalar fields during inflation -- which are necessary for understanding the research papers. We also review standard linear cosmological perturbation theory and develop the second-order perturbation theory formalism for two scalar fields. We explain what is meant by non-Gaussian perturbations and discuss some difficulties in their parametrisation and observation, concentrating in particular on the nonlinearity parameter. The prospects of observing non-Gaussianity are briefly discussed. We apply the formalism and calculate the evolution of the second-order curvature perturbation during hybrid inflation. We estimate the amount of non-Gaussianity in the model and find that an observable effect is possible. The non-Gaussianity arising in preheating is also studied. We find that the level produced by the simplest model of instant preheating is insignificant, whereas standard preheating with parametric resonance as well as tachyonic preheating can easily saturate, and even exceed, the observational limits. We also mention other approaches to the study of primordial non-Gaussianities which differ from the perturbation theory method chosen in the thesis work.
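
For orientation, the nonlinearity parameter f_NL discussed here is conventionally defined through the local parametrisation of the Bardeen potential (a standard convention in the literature, quoted as background rather than taken from the thesis):

\[
\Phi(\mathbf{x}) = \Phi_G(\mathbf{x}) + f_{\mathrm{NL}} \left( \Phi_G^2(\mathbf{x}) - \langle \Phi_G^2 \rangle \right),
\]

where \Phi_G is a Gaussian random field. Purely Gaussian perturbations correspond to f_NL = 0, and observational limits on f_NL are what the preheating scenarios above can saturate or exceed.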

Relevance: 20.00%

Abstract:

This thesis studies empirically whether measurement errors in aggregate production statistics affect sentiment and future output. Initial announcements of aggregate production are subject to measurement error, because many of the data required to compile the statistics are produced with a lag. This measurement error can be gauged as the difference between the latest revised statistic and its initial announcement. Assuming aggregate production statistics help forecast future aggregate production, these measurement errors are expected to affect macroeconomic forecasts. Assuming agents' macroeconomic forecasts affect their production choices, these measurement errors should affect future output through sentiment. The thesis is primarily empirical, so its theoretical basis, strategic complementarity, is discussed only briefly. Strategic complementarity describes a situation in which higher aggregate production increases each agent's incentive to produce. In this circumstance a statistical announcement suggesting that aggregate production is high increases each agent's incentive to produce, thus resulting in higher aggregate production. In this way strategic complementarity provides the theoretical basis for output fluctuations caused by measurement errors in aggregate production statistics. Previous empirical studies suggest that measurement errors in gross national product affect future aggregate production in the United States. It has also been demonstrated that measurement errors in the Index of Leading Indicators affect forecasts by professional economists as well as future industrial production in the United States. This thesis aims to verify the applicability of these findings to other countries and to study the link between measurement errors in gross domestic product, sentiment, and future output. Professional forecasts and consumer sentiment in the United States and Finland, as well as producer sentiment in Finland, are used as measures of sentiment. It is found that measurement errors in gross domestic product affect forecasts and producer sentiment, while the effect on consumer sentiment is ambiguous. The relationship between measurement errors and future output is explored using data from Finland, the United States, the United Kingdom, New Zealand and Sweden. Measurement errors are found to have affected aggregate production or investment in Finland, the United States, the United Kingdom and Sweden. Specifically, overly optimistic statistical announcements are associated with higher output, and vice versa.
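
As a minimal sketch of how such measurement errors can be constructed and related to sentiment (the data, variable names and frequency here are illustrative, not taken from the thesis):

    import pandas as pd
    import statsmodels.api as sm

    # Illustrative quarterly data: initial and latest revised GDP growth (%)
    # together with a sentiment index observed after the initial announcement.
    df = pd.DataFrame({
        "initial_gdp": [2.1, 1.8, 0.5, 3.0, 2.4, 1.1],
        "revised_gdp": [2.4, 1.5, 0.9, 3.1, 2.0, 1.3],
        "sentiment":   [101.2, 99.8, 97.5, 103.0, 100.4, 98.9],
    })

    # The measurement error is gauged as revised minus initial announcement;
    # a negative value means the initial announcement was overly optimistic.
    df["meas_error"] = df["revised_gdp"] - df["initial_gdp"]

    # Regress sentiment on the measurement error; a significant coefficient
    # would suggest that announcement errors feed into sentiment.
    result = sm.OLS(df["sentiment"], sm.add_constant(df["meas_error"])).fit()
    print(result.params)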

Relevance: 20.00%

Abstract:

The dissertation consists of an introductory chapter and three essays that apply search-matching theory to study the interaction of labor market frictions, technological change and macroeconomic fluctuations. The first essay studies the impact of capital-embodied growth on equilibrium unemployment by extending a vintage capital/search model to incorporate vintage human capital. In addition to the capital obsolescence (or creative destruction) effect that tends to raise unemployment, vintage human capital introduces a skill obsolescence effect of faster growth that has the opposite sign. Faster skill obsolescence reduces the value of unemployment and hence wages, leading to more job creation and less job destruction, and thus unambiguously reduces unemployment. The second essay studies the effect of skill-biased technological change on skill mismatch and the allocation of workers and firms in the labor market. By allowing workers to invest in education, we extend a matching model with two-sided heterogeneity to incorporate an endogenous distribution of high- and low-skill workers. We consider various possibilities for the cost of acquiring skills and show that while unemployment increases in most scenarios, the effect on the distribution of vacancy and worker types varies with the structure of skill costs. When the model is extended to incorporate endogenous labor market participation, we show that the unemployment rate becomes less informative about the state of the labor market, as the participation margin absorbs employment effects. The third essay studies the effects of labor taxes on equilibrium labor market outcomes and macroeconomic dynamics in a New Keynesian model with matching frictions. Three policy instruments are considered: a marginal tax and a tax subsidy to produce tax progression schemes, and a replacement ratio to account for variability in outside options. In equilibrium, the marginal tax rate and the replacement ratio dampen economic activity, whereas tax subsidies boost the economy. The marginal tax rate and the replacement ratio amplify shock responses, whereas employment subsidies weaken them. The tax instruments affect the degree to which the wage absorbs shocks. We show that increasing tax progression when taxation is initially progressive is harmful for steady-state employment and output, and amplifies the sensitivity of macroeconomic variables to shocks. When taxation is initially proportional, increasing progression is beneficial for output and employment and dampens shock responses.
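
For reference, the matching frictions invoked in all three essays are conventionally modelled with a constant-returns matching function (the standard Diamond-Mortensen-Pissarides form, stated here as background rather than as the thesis's exact specification):

\[
m(u,v) = A\,u^{\alpha}v^{1-\alpha}, \qquad \theta \equiv \frac{v}{u}, \qquad
f(\theta) = \frac{m(u,v)}{u} = A\,\theta^{1-\alpha}, \qquad
q(\theta) = \frac{m(u,v)}{v} = A\,\theta^{-\alpha},
\]

where u is unemployment, v vacancies, \theta labor market tightness, f(\theta) the workers' job-finding rate and q(\theta) the firms' vacancy-filling rate. Policies that change the value of unemployment or wages shift job creation through these rates.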

Relevance: 20.00%

Abstract:

Cosmological observations of light from type Ia supernovae, the cosmic microwave background and the galaxy distribution seem to indicate that the expansion of the universe has accelerated during the latter half of its age. Within standard cosmology this is ascribed to dark energy, a uniform fluid with large negative pressure that gives rise to repulsive gravity but also entails serious theoretical problems. Understanding the physical origin of the perceived accelerated expansion has been described as one of the greatest challenges in theoretical physics today. In this thesis we discuss the possibility that, instead of dark energy, the acceleration is caused by an effect of nonlinear structure formation on light, an effect ignored in standard cosmology. A physical interpretation of the effect goes as follows: as the initially smooth matter clusters over time into filaments of opaque galaxies, the regions through which the detectable light travels become emptier and emptier relative to the average. Since the developing voids expand the faster the lower their matter density becomes, the expansion can accelerate along our line of sight without local acceleration, potentially obviating the need for the mysterious dark energy. In addition to offering a natural physical interpretation of the acceleration, we have shown that an inhomogeneous model is able to match the main cosmological observations without dark energy, resulting in a concordant picture of the universe with 90% dark matter, 10% baryonic matter and 15 billion years as the age of the universe. The model also provides an elegant solution to the coincidence problem: if induced by the voids, the onset of the perceived acceleration naturally coincides with the formation of the voids. Future tests include quantitative predictions for angular deviations and a theoretical derivation of the model to reduce the required phenomenology. A spin-off of the research is a physical classification of cosmic inhomogeneities according to how they could induce accelerated expansion along our line of sight. We have identified three physically distinct mechanisms: global acceleration due to spatial variations in the expansion rate, a faster local expansion rate due to a large local void, and biased light propagation through voids that expand faster than the average. A general conclusion is that the physical properties crucial for accounting for the perceived acceleration are the growth of the inhomogeneities and the inhomogeneities in the expansion rate. The existence of these properties in the real universe is supported by both observational data and theoretical calculations. However, better data and more sophisticated theoretical models are required to vindicate or disprove the conjecture that the inhomogeneities are responsible for the acceleration.
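
One standard way to formalise how inhomogeneities can induce average acceleration without local acceleration (quoted as background; the thesis's classification of mechanisms is broader) is Buchert's averaged Raychaudhuri equation for an irrotational dust domain D:

\[
3\,\frac{\ddot a_{\mathcal D}}{a_{\mathcal D}} = -4\pi G\,\langle \rho \rangle_{\mathcal D} + \mathcal Q_{\mathcal D},
\qquad
\mathcal Q_{\mathcal D} = \frac{2}{3}\left( \langle \theta^2 \rangle_{\mathcal D} - \langle \theta \rangle_{\mathcal D}^2 \right) - 2\,\langle \sigma^2 \rangle_{\mathcal D},
\]

where \theta is the local expansion rate and \sigma the shear. A sufficiently large variance of the expansion rate between voids and walls makes the backreaction term Q_D positive, so the average scale factor a_D can accelerate even though every local region decelerates.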

Relevance: 20.00%

Abstract:

In this thesis we consider the phenomenology of supergravity, and in particular the particle called the gravitino. We begin with an introductory part, where we discuss the theories of inflation, supersymmetry and supergravity. Gravitino production is then investigated in detail through the research papers included here. First we study the scattering of massive W bosons in the thermal bath of particles during the period of reheating. We show that the process generates non-trivial contributions to the cross section, which eventually lead to unitarity breaking above a certain scale. This happens because, in the annihilation diagram, the longitudinal degrees of freedom in the propagator of the gauge bosons disappear from the amplitude by virtue of the supergravity vertex. Accordingly, the longitudinal polarizations of the on-shell W become strongly interacting in the high-energy limit. By studying the process with both gauge and mass eigenstates, it is shown that the inclusion of diagrams with off-shell scalars of the MSSM does not cancel the divergences. Next, we approach cosmology more closely and study the decay of a scalar field S into gravitinos at the end of inflation. Once its mass is comparable to the Hubble rate, the field starts coherent oscillations about the minimum of its potential and decays perturbatively. We embed S in a model of gauge mediation with metastable vacua, where the hidden sector is of the O'Raifeartaigh type. First we discuss the dynamics of the field in the expanding background; then radiative corrections to the scalar potential V(S) and to the Kähler potential are calculated. Constraints on the reheating temperature are obtained by demanding that the gravitinos thus produced account for the observed dark matter density. We consistently modify earlier results in the literature and find that the gravitino number density and the reheating temperature T_R are extremely sensitive to the parameters of the model. This means that it is easy to account for gravitino dark matter with an arbitrarily low reheating temperature.
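
The onset of the coherent oscillations described above follows from a textbook result for a scalar field in an expanding background (stated here for context, assuming a quadratic potential near the minimum):

\[
H(t) \gg m_S:\ S \simeq \mathrm{const.}, \qquad
H(t) \simeq m_S:\ S(t) \propto a^{-3/2}(t)\,\cos(m_S t), \quad \rho_S \propto a^{-3},
\]

i.e. the field is frozen by Hubble friction until the expansion rate drops to its mass, after which it oscillates about the minimum with an energy density redshifting like matter; the perturbative decay into gravitinos proceeds from these oscillations.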

Relevance: 20.00%

Abstract:

Cosmological inflation is the dominant paradigm for explaining the origin of structure in the universe. According to the inflationary scenario, there was a period of nearly exponential expansion in the very early universe, long before nucleosynthesis. Inflation is commonly considered a consequence of some scalar field or fields whose energy density starts to dominate the universe. The inflationary expansion converts the quantum fluctuations of the fields into classical perturbations on superhorizon scales, and these primordial perturbations are the seeds of the structure in the universe. Moreover, inflation naturally explains the high degree of homogeneity and spatial flatness of the early universe. The real challenge of inflationary cosmology lies in establishing a connection between the fields driving inflation and theories of particle physics. In this thesis we concentrate on inflationary models at scales well below the Planck scale. The low scale allows us to seek candidates for the inflationary matter within extensions of the Standard Model, but it typically also implies fine-tuning problems. We discuss a low-scale model where inflation is driven by a flat direction of the Minimal Supersymmetric Standard Model, and study the relation between the potential along the flat direction and the underlying supergravity model. The low inflationary scale requires an extremely flat potential, but we find that in this particular model the associated fine-tuning problems can be solved in a rather natural fashion in a class of supergravity models. For this class of models, the flatness is a consequence of the structure of the supergravity model and is insensitive to the vacuum expectation values of the fields that break supersymmetry. Another low-scale model considered in the thesis is the curvaton scenario, where the primordial perturbations originate from quantum fluctuations of a curvaton field, which is different from the fields driving inflation. The curvaton gives a negligible contribution to the total energy density during inflation, but its perturbations become significant in the post-inflationary epoch. The separation between the fields driving inflation and the fields giving rise to primordial perturbations opens up new possibilities for lowering the inflationary scale without introducing fine-tuning problems. The curvaton model typically gives rise to a relatively large level of non-Gaussian features in the statistics of primordial perturbations. We find that the level of non-Gaussian effects depends heavily on the form of the curvaton potential. Future observations that provide more accurate information on non-Gaussian statistics can therefore place tight bounds on the curvaton interactions.
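
For context, in the simplest curvaton model with a quadratic potential the curvature perturbation and its non-Gaussianity are controlled by the curvaton's energy fraction at decay (the standard Lyth-Wands result, quoted as background; the thesis studies how departures from this potential change the picture):

\[
\zeta \simeq \frac{2r}{3}\,\frac{\delta\sigma_*}{\sigma_*}, \qquad
f_{\mathrm{NL}} \simeq \frac{5}{4r} - \frac{5}{3} - \frac{5r}{6}, \qquad
r = \left. \frac{3\rho_\sigma}{3\rho_\sigma + 4\rho_{r}} \right|_{\mathrm{decay}},
\]

so a curvaton that is subdominant when it decays (r much less than 1) generically produces a large positive f_NL, which is why non-Gaussian statistics can constrain the scenario.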

Relevance: 20.00%

Abstract:

The first quarter of the 20th century witnessed a rebirth of cosmology, the study of our Universe, as a field of scientific research with testable theoretical predictions. The amount of available cosmological data grew slowly from a few galaxy redshift measurements, rotation curves and local light element abundances to the first detection of the cosmic microwave background (CMB) in 1965. By the turn of the century the amount of data had exploded, incorporating new, exciting cosmological observables such as lensing, Lyman alpha forests, type Ia supernovae, baryon acoustic oscillations and Sunyaev-Zeldovich regions, to name a few.

The CMB, the ubiquitous afterglow of the Big Bang, carries with it a wealth of cosmological information. Unfortunately, that information, delicate intensity variations, turned out to be hard to extract from the overall temperature. After the first detection, it took nearly 30 years before the first evidence of fluctuations in the microwave background was presented. At present, high-precision cosmology is solidly based on precise measurements of the CMB anisotropy, making it possible to pinpoint cosmological parameters to one-in-a-hundred precision. This progress has made it possible to build and test models of the Universe that differ in the way the cosmos evolved during some fraction of the first second after the Big Bang.

This thesis is concerned with high-precision CMB observations. It presents three selected topics along a CMB experiment analysis pipeline. Map-making and residual noise estimation are studied using an approach called destriping. The approximate methods studied are invaluable for the large datasets of any modern CMB experiment and will undoubtedly become even more so when the next generation of experiments reaches the operational stage.

We begin with a brief overview of cosmological observations and describe the general relativistic perturbation theory. Next we discuss the map-making problem of a CMB experiment and the characterization of the residual noise present in the maps. Finally, the use of modern cosmological data is presented in the study of an extended cosmological model with correlated isocurvature fluctuations. Currently available data indicate that future experiments are needed to provide more information on these extra degrees of freedom. Any solid evidence of isocurvature modes would have a considerable impact due to their power in model selection.
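
To make the map-making and destriping problems concrete (standard generalised least-squares notation, quoted as background):

\[
\mathbf d = \mathbf P\,\mathbf m + \mathbf n, \qquad
\hat{\mathbf m} = \left( \mathbf P^{\mathsf T} \mathbf N^{-1} \mathbf P \right)^{-1} \mathbf P^{\mathsf T} \mathbf N^{-1} \mathbf d,
\]

where d is the time-ordered data, P the pointing matrix, m the pixelised sky and N the noise covariance. Destriping approximates the correlated part of the noise as n = F a + white noise, where the columns of F spread a short vector of baseline offsets a into the time stream; solving for a and subtracting F a before binning the map is what makes the method tractable for the very large datasets mentioned above.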

Relevance: 10.00%

Abstract:

In this paper, I look into a grammatical phenomenon found among speakers of the Cambridgeshire dialect of English. According to my hypothesis, the phenomenon is a new entry into the past BE verb paradigm in the English language. I claim that the structure I have found complements the existing two verb forms, was and were, with a third verb form that I have labelled 'intermediate past BE'. The paper is divided into two parts. In the first, I introduce the theoretical ground for the study of variation, which is founded on empiricist principles. The main claim of variationist linguistics is that heterogeneous language use is structured and ordered. Over the last 50 years of modern linguistics, this claim has been controversial. In the 1960s, the generativist movement spearheaded by Noam Chomsky diverted attention away from grammatical theories based on empirical observations. The generativists steered away from language diversity, variation and change in favour of generalisations, abstractions and universalist claims. The theoretical part of my paper goes through the main points of the variationist agenda and concludes that abandoning the concept of language variation in linguistics is harmful for both theory and methodology. In the method part of the paper, I present the Helsinki Archive of Regional English Speech (HARES) corpus, an audio archive that contains interviews conducted in England in the 1970s and 1980s. The interviews were done in accordance with methods generally used in traditional dialectology. The informants are mostly elderly men who have lived in the same region throughout their lives and who left school at an early age. The interviews are actually conversations: the interviewer allowed the informant to pick the topic of conversation to induce a maximally relaxed and comfortable atmosphere and thus allow the most natural dialect variant to emerge in the informant's speech. The corpus chapter introduces some of the transcription and annotation problems associated with spoken language corpora, especially those containing dialectal speech. Questions surrounding the concept of variation are present in this part of the paper too, as transcription work in particular is troubled by the fundamental problem of having to describe the fluctuations of everyday speech in text. In the empirical section of the paper, I use HARES to analyse the speech of four informants, with special focus on the emergence of the intermediate past BE variant. My observations and the subsequent analysis permit me to claim that my hypothesis seems to hold: the intermediate variant occupies almost all contexts where one would expect was or were in the informants' speech. This means that the new variant is integrated into the speakers' grammars and exemplifies the kind of variation that is at the heart of this paper.

Relevance: 10.00%

Abstract:

The thesis addresses the problem of Finnish Iron Age bells, pellet bells and bell pendants, previously unexplored musical artefacts from 400 to 1300 AD. The study, which contributes to the field of music archaeology, aims to provide a gateway to ancient soundworlds and ideas of music making. The research questions include: Where did these metal artefacts come from? How did they sound? How were they used? What did their sound mean to the people of the Iron Age? The data collected at the National Museum of Finland and at several provincial museums cover a total of 486 bells, pellet bells and bell pendants. By means of a cluster analysis, each category was divided into several subgroups. The subgroups, which all seem to have a different dating and geographical distribution, represent a spread of both local and international manufacturing traditions. According to an elemental analysis, the material varies from iron to copper-tin, copper-lead and copper-tin-lead alloys. Clappers, pellets and pebbles prove that the bells and pellet bells were indisputably instruments intended for sound production. Clusters of small bell pendants, however, probably produced sound by jingling against each other. Spectrogram plots reveal that the partials of the still audible sounds range from 1 000 to 19 850 Hz. On the basis of 129 inhumation graves, hoards, barrows and stray finds, it seems evident that the bells, pellet bells and bell pendants were fastened to dresses and horse harnesses or carried in pouches and boxes. The resulting acoustic spaces could have been employed in constructing social hierarchies, since the instruments usually appear in richly furnished graves. Furthermore, the instruments repeatedly occur with crosses, edge tools and zoomorphic pendants that in the later Finnish-Karelian culture were regarded as prophylactic amulets. In the Iron Age as well as in later folk culture, the bell sounds seem to have expressed territorial, social and cosmological boundaries.
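
As a minimal sketch of how partials can be read off a spectrogram of a recorded bell strike (the signal below is synthetic and all numbers illustrative; the thesis works with recordings of the museum artefacts):

    import numpy as np
    from scipy.signal import spectrogram, find_peaks

    fs = 44100                                # sampling rate (Hz)
    t = np.arange(0, 2.0, 1 / fs)             # two-second window
    # Stand-in for a recorded strike: two exponentially decaying partials.
    x = np.exp(-3 * t) * (np.sin(2 * np.pi * 1200 * t)
                          + 0.5 * np.sin(2 * np.pi * 3400 * t))

    f, tt, Sxx = spectrogram(x, fs=fs, nperseg=4096)
    # Partials appear as horizontal ridges; locate peaks in the
    # time-averaged spectrum above 1% of the maximum power.
    mean_power = Sxx.mean(axis=1)
    peaks, _ = find_peaks(mean_power, height=0.01 * mean_power.max())
    print(f[peaks])                           # roughly 1200 Hz and 3400 Hz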

Relevance: 10.00%

Abstract:

A small fraction of the energy absorbed in the light reactions of photosynthesis is re-emitted as chlorophyll-a fluorescence. Chlorophyll-a fluorescence and photochemistry compete for excitation energy in photosystem II (PSII); therefore, changes in photochemical capacity can be detected through analysis of chlorophyll fluorescence. Chlorophyll fluorescence techniques have been widely used to follow the diurnal (fast) and seasonal (slow) acclimation in the partitioning of energy between photochemical and non-photochemical processes in PSII. Energy partitioning in PSII estimated through chlorophyll fluorescence can be used as a proxy of the plant's physiological status and measured at different spatial and temporal scales. However, a number of technical and theoretical issues still limit the use of chlorophyll fluorescence data for the study of the acclimation of PSII. The aim of this thesis was to study the diurnal and seasonal acclimation of PSII under field conditions by developing and testing new chlorophyll fluorescence-based tools that overcome these limitations. A new model capable of following the fast acclimation of PSII to rapid fluctuations in light intensity was developed and used to study the rapid acclimation of the electron transport rate under fluctuating light. Additionally, new chlorophyll fluorescence parameters were developed for estimating the seasonal acclimation in the sustained rate constants of thermal energy dissipation and photochemistry. The parameters were used to quantitatively evaluate the effect of light and temperature on the seasonal acclimation of PSII. The results indicated that the light environment affected not only the degree but also the kinetics of the acclimation response to temperature, which was attributed to differences in the structural organization of PSII during seasonal acclimation. Furthermore, zeaxanthin-facilitated thermal dissipation appeared to be the main mechanism modulating the fraction of absorbed energy dissipated thermally during winter in field-grown Scots pine. Finally, the integration between diurnal and seasonal acclimation mechanisms was studied using the recently developed MONI-PAM instrument (Walz GmbH, Germany), which is capable of continuously monitoring energy partitioning in PSII.
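
The fluorescence-based proxies mentioned above build on a standard partitioning of absorbed energy; for instance, the widely used effective quantum yield of PSII photochemistry is defined as (a textbook definition, not a parameter introduced by the thesis):

\[
\Phi_{\mathrm{PSII}} = \frac{F_m' - F}{F_m'},
\]

where F is the steady-state fluorescence in the light and F_m' the maximal fluorescence during a saturating light pulse. The new parameters developed in the thesis extend this kind of partitioning to sustained thermal dissipation and photochemistry over the season.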

Relevance: 10.00%

Abstract:

World marine fisheries suffer from economic and biological overfishing: too many vessels are harvesting too few fish stocks. Fisheries economics has explained the causes of overfishing and provided a theoretical background for management systems capable of solving the problem, yet only a few examples of fisheries managed by the principles of bioeconomic theory exist. With the aim of bridging the gap between the fish stock assessment models actually used to provide management advice and economic optimisation models, the thesis explores economically sound harvesting from national and international perspectives. Using data calibrated for the Baltic salmon and herring stocks, optimal harvesting policies are outlined using numerical methods. First, the thesis focuses on the socially optimal harvest of a single salmon stock by commercial and recreational fisheries. The results obtained using dynamic programming show that the optimal fishery configuration would be to close down three of the five studied fisheries; this result is robust to stock size fluctuations. Compared to a base case situation, the optimal fleet structure would yield a slight decrease in the commercial catch, but a recreational catch nearly seven times higher. As a result, the expected economic net benefits from the fishery would increase by nearly 60%, and the expected number of juvenile salmon (smolt) would increase by 30%. Second, the thesis explores the management of multiple salmon stocks in an international framework. Non-cooperative and cooperative game theory are used to examine different "what if" scenarios. The results of the four-player game suggest that, despite the commonly agreed fishing quota, the behaviour of the countries has been closer to non-cooperation than cooperation. Cooperation would more than double the net benefits from the fishery compared to past fisheries policy; side payments, however, are a prerequisite for a cooperative solution. Third, the thesis applies coalitional games in partition function form to study whether the cooperative solution would be stable despite the potential presence of positive externalities. The results show that the cooperation of two of the four studied countries can be stable. Compared to past fisheries policy, a stable coalition structure would provide substantial economic benefits; nevertheless, the status of the salmon stocks would not improve significantly. Fourth, the thesis studies the prerequisites for and potential consequences of implementing an individual transferable quota (ITQ) system in the Finnish herring fishery. Simulation results suggest that ITQs would decrease the number of fishing vessels but would enable positive profits to coexist with a higher stock size. The empirical findings of the thesis affirm that the profitability of the studied fisheries could be improved. The evidence, however, indicates that incentives for free riding exist, and thus the most preferable outcome in both economic and biological terms is elusive.
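
As a reference point, the dynamic programming problems solved in the thesis have the general structure of a discounted bioeconomic harvesting model (the standard Clark-type formulation; the calibrated models for Baltic salmon and herring are richer than this):

\[
\max_{\{h_t\}} \ \sum_{t=0}^{\infty} \rho^{t}\, \pi(x_t, h_t)
\quad \text{s.t.} \quad x_{t+1} = x_t + F(x_t) - h_t, \quad 0 \le h_t \le x_t,
\]

where x_t is the fish stock, h_t the harvest, F the stock's biological growth function, \pi the net economic benefit and \rho the discount factor. The game-theoretic essays replace the single planner with several countries choosing harvests strategically over shared stocks.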