24 results for grading inflation


Relevance: 10.00%

Abstract:

Cosmological inflation is the dominant paradigm for explaining the origin of structure in the universe. According to the inflationary scenario, there was a period of nearly exponential expansion in the very early universe, long before nucleosynthesis. Inflation is commonly considered a consequence of some scalar field or fields whose energy density comes to dominate the universe. The inflationary expansion converts the quantum fluctuations of the fields into classical perturbations on superhorizon scales, and these primordial perturbations are the seeds of the structure in the universe. Moreover, inflation also naturally explains the high degree of homogeneity and spatial flatness of the early universe. The real challenge of inflationary cosmology lies in trying to establish a connection between the fields driving inflation and theories of particle physics. In this thesis we concentrate on inflationary models at scales well below the Planck scale. The low scale allows us to seek candidates for the inflationary matter within extensions of the Standard Model, but it typically also implies fine-tuning problems. We discuss a low scale model where inflation is driven by a flat direction of the Minimal Supersymmetric Standard Model. The relation between the potential along the flat direction and the underlying supergravity model is studied. The low inflationary scale requires an extremely flat potential, but we find that in this particular model the associated fine-tuning problems can be solved in a rather natural fashion in a class of supergravity models. For this class of models, the flatness is a consequence of the structure of the supergravity model and is insensitive to the vacuum expectation values of the fields that break supersymmetry. Another low scale model considered in the thesis is the curvaton scenario, where the primordial perturbations originate from quantum fluctuations of a curvaton field, which is different from the fields driving inflation. The curvaton gives a negligible contribution to the total energy density during inflation, but its perturbations become significant in the post-inflationary epoch. The separation between the fields driving inflation and the fields giving rise to primordial perturbations opens up new possibilities to lower the inflationary scale without introducing fine-tuning problems. The curvaton model typically gives rise to a relatively large level of non-Gaussian features in the statistics of primordial perturbations. We find that the level of non-Gaussian effects is heavily dependent on the form of the curvaton potential. Future observations that provide more accurate information about the non-Gaussian statistics can therefore place stringent bounds on the curvaton interactions.
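For orientation, the flatness requirement mentioned above is usually quantified by the standard slow-roll conditions (a textbook relation, stated here for context rather than taken from the thesis), with V the potential along the inflationary direction and M_P the reduced Planck mass:

\epsilon \equiv \frac{M_P^2}{2}\left(\frac{V'}{V}\right)^2 \ll 1, \qquad |\eta| \equiv M_P^2\,\left|\frac{V''}{V}\right| \ll 1.

At a low inflationary scale, keeping |\eta| small generically requires delicate cancellations, which is the type of fine-tuning the supergravity structure discussed above is argued to remove.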

Relevance: 10.00%

Abstract:

In this thesis we examine multi-field inflationary models of the early Universe. Since non-Gaussianities may allow for the possibility to discriminate between models of inflation, we compute deviations from a Gaussian spectrum of primordial perturbations by extending the delta-N formalism. We use N-flation as a concrete model; our findings show that these models are generically indistinguishable as long as the slow-roll approximation is still valid. Besides computing non-Gaussianities, we also investigate preheating after multi-field inflation. Within the framework of N-flation, we find that preheating via parametric resonance is suppressed, an indication that it is the old theory of preheating that is applicable. In addition to studying non-Gaussianities and preheating in multi-field inflationary models, we study magnetogenesis in the early universe. To this end, we propose a mechanism to generate primordial magnetic fields via rotating cosmic string loops. Magnetic fields in the micro-Gauss range have been observed in galaxies and clusters, but their origin has remained elusive. We consider a network of strings and find that rotating cosmic string loops, which are continuously produced in such networks, are viable candidates for magnetogenesis with the relevant strength and length scales, provided we assume a high string tension and an efficient dynamo.
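For context, the delta-N formalism referred to above expresses the curvature perturbation zeta in terms of the perturbed number of e-folds N as a function of the field values at horizon exit. For canonically normalized fields with nearly Gaussian perturbations, the standard expansion and the corresponding non-linearity parameter read (a textbook result quoted for orientation, not a new result of the thesis):

\zeta = \sum_a N_{,a}\,\delta\varphi^a + \frac{1}{2}\sum_{a,b} N_{,ab}\,\delta\varphi^a\,\delta\varphi^b + \dots, \qquad \frac{6}{5} f_{NL} = \frac{\sum_{a,b} N_{,a} N_{,b} N_{,ab}}{\big(\sum_c N_{,c} N_{,c}\big)^2},

where N_{,a} = \partial N / \partial\varphi^a. The thesis extends this type of computation to the many-field setting of N-flation.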

Relevance: 10.00%

Abstract:

Inflation is a period of accelerated expansion in the very early universe, which has the appealing aspect that it can create primordial perturbations via quantum fluctuations. These primordial perturbations have been observed in the cosmic microwave background, and they also function as the seeds of all large-scale structure in the universe. Curvaton models are simple modifications of the standard inflationary paradigm: inflation is driven by the energy density of the inflaton, but another field, the curvaton, is responsible for producing the primordial perturbations. The curvaton decays after inflation has ended, whereupon the isocurvature perturbations of the curvaton are converted into adiabatic perturbations. Since the curvaton must decay, it must have some interactions. Additionally, realistic curvaton models typically have some self-interactions. In this work we consider self-interacting curvaton models, where the self-interaction is a monomial in the potential, suppressed by the Planck scale, so that the self-interaction is very weak. Nevertheless, since the self-interaction makes the equations of motion non-linear, it can modify the behaviour of the model very drastically. The most intriguing aspect of this behaviour is that the final properties of the perturbations become highly dependent on the initial values. Departures from a Gaussian distribution are important observables of the primordial perturbations. Due to the non-linearity of the self-interacting curvaton model and its sensitivity to initial conditions, it can produce significant non-Gaussianity of the primordial perturbations. In this work we investigate the non-Gaussianity produced by the self-interacting curvaton and demonstrate that the non-Gaussianity parameters do not obey the analytically derived approximate relations often cited in the literature. Furthermore, we also consider a self-interacting curvaton with a mass at the TeV scale. Motivated by realistic particle physics models such as the Minimal Supersymmetric Standard Model, we demonstrate that a curvaton model in this mass range can be responsible for the observed perturbations if the curvaton decays late enough.
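For reference, the often-cited approximate relation in question is the sudden-decay result for a quadratic (non-self-interacting) curvaton, in which the non-linearity parameter is fixed by the curvaton energy fraction at decay (a standard formula quoted here for comparison, not a result of this thesis):

r_{dec} = \left.\frac{3\rho_\sigma}{3\rho_\sigma + 4\rho_\gamma}\right|_{\text{decay}}, \qquad f_{NL} \simeq \frac{5}{4\,r_{dec}} - \frac{5}{3} - \frac{5\,r_{dec}}{6}.

A Planck-suppressed self-interaction makes the curvaton evolve non-linearly between horizon exit and decay, which is why the parameters computed in the thesis can depart strongly from relations of this type.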

Relevance: 10.00%

Abstract:

The study seeks to find out whether the real burden of personal taxation has increased or decreased. In order to determine this, we investigate how the same real income has been taxed in different years. Whenever the taxes on the same real income are higher in a given year than in the base year, the real tax burden has increased; if they are lower, the real tax burden has decreased. The study thus seeks to estimate how changes in the tax regulations affect the real tax burden. It should be kept in mind that the progression in the central government income tax schedule ensures that a real change in income will bring about a change in the tax ratio. Inflation will likewise increase the real tax burden if the tax schedules are kept nominally unchanged. In the calculations of the study it is assumed that real income remains constant, so that we obtain an unbiased measure of the effects of governmental actions in real terms. The main factors influencing the amount of income taxes an individual must pay are as follows:
- gross income (income subject to central and local government taxes);
- deductions from gross income and taxes calculated according to tax schedules;
- the central government income tax schedule (progressive income taxation);
- the rates for local taxes and social security payments (proportional taxation).
In the study we investigate how much a certain group of taxpayers would have paid in taxes according to the actual tax regulations prevailing in different years if their income had been kept constant in real terms. Other factors affecting tax liability are kept strictly unchanged (as constants). The resulting taxes, expressed in fixed prices, are then compared to the taxes levied in the base year (hypothetical taxation). The question we are addressing is thus how much taxes a certain group of taxpayers with the same socioeconomic characteristics would have paid on the same real income according to the actual tax regulations prevailing in different years. This has been suggested as the main way to measure real changes in taxation, although there are several alternative measures with essentially the same aim. Next, an aggregate indicator of changes in income tax rates is constructed. It is designed to show how much the taxation of income has increased or decreased from one year to the next on average. The main question remains: how should aggregation over all income levels be performed? In order to determine the average real changes in the tax scales, the difference functions (the differences between the actual and hypothetical taxation functions) were aggregated using taxable income as weights. Besides the difference functions, the relative changes in real taxes can be used as indicators of change. In this case the ratio between the taxes computed according to the new and the old situation indicates whether taxation has become heavier or lighter. The relative changes in tax scales can be described in a way similar to that used in describing the cost of living, or by means of price indices. For example, we can use Laspeyres' price index formula to compute the ratio between the taxes determined by the new tax scales and the old tax scales. The formula answers the question: how much more or less will be paid in taxes according to the new tax scales than according to the old ones when the real income situation corresponds to the old situation?
In real terms, the central government tax burden experienced a steady decline from its high post-war level up until the mid-1950s. The real tax burden then drifted upwards until the mid-1970s; the real level of taxation in 1975 was twice that of 1961. In the 1980s there was a steady phase due to the inflation corrections of the tax schedules. In 1989 the tax schedule was lowered drastically, and from the mid-1990s onwards changes in the tax schedules have decreased the real tax burden significantly. Local tax rates have risen continuously, from 10 percent in 1948 to nearly 19 percent in 2008. Deductions have lowered the real tax burden especially in recent years. The aggregate figures indicate how the tax ratio for the same real income has changed over the years according to the prevailing tax regulations. We call the tax ratio calculated in this manner the real income tax ratio. A change in the real income tax ratio depicts an increase or decrease in the real tax burden. The real income tax ratio declined for some years after the war. From the beginning of the 1960s to the mid-1970s it nearly doubled. Since the mid-1990s the real income tax ratio has fallen by about 35 percent.
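As a sketch of the aggregation just described (with notation introduced here purely for illustration), a Laspeyres-type index for the change in the tax scales compares the taxes due under the new and the old regulations at the base-year real incomes y_i, using taxable income weights w_i:

L = \frac{\sum_i w_i\, T_1(y_i)}{\sum_i w_i\, T_0(y_i)},

where T_0 and T_1 denote the tax functions implied by the old and the new tax scales. A value L > 1 means that the same real income is taxed more heavily under the new scales, i.e. the real tax burden has increased, and L < 1 means that it has decreased.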

Relevance: 10.00%

Abstract:

The 1980s and the early 1990s have proved to be an important turning point in the history of the Nordic welfare states. After this breaking point, the Nordic social order has been built upon a new foundation. This study shows that the new order is mainly built upon new hierarchies and control mechanisms that have been developed consistently through economic and labour market policy measures. During the post-war period the Nordic welfare states, to an increasing extent, created equality of opportunity and scope for agency among people. Public social services were available to all and the tax-benefit system maintained an even income distribution. During this golden era of the Nordic welfare state, the scope for agency was, however, limited by social structures. Public institutions and law tended to categorize people according to their life circumstances, ascribing them a predefined role. In the 1980s and 1990s this collectivist social order began to mature and became subject to political renegotiation. Signs of a new social order in the Nordic countries have included the liberalization of the financial markets, the privatization of public functions and the redefinition of the role of the public sector. It is now possible to reassess the ideological foundations of this new order. In contrast to widely used political rhetoric, the foundation of the new order has not been the ideas of individual freedom or choice. Instead, the most important aim appears to have been to control and direct people to act in accordance with the rules of the market. The various levels of government and the social security system have been redirected to serve this goal. Instead of being a mechanism for redistributing income, the Nordic social security system has been geared towards creating new hierarchies on the Nordic labour markets. During the past decades, the conditions for receiving income support and unemployment benefit have been tightened in all Nordic countries. As a consequence, people have been forced to accept deteriorating terms and conditions on the labour market. Country-specific variations exist, however: in sum, Sweden has been the most conservative, Denmark the most innovative and Finland the most radical in reforming labour market policy. The new hierarchies on the labour market have coincided with slow or non-existent growth of real wages and with strong growth in the share of capital income. Slow growth of real wages has kept inflation low and thus secured the value of capital. Societal development has thus progressed from equality of opportunity during the age of the welfare states towards a hierarchical social order where the majority of people face increasing constraints and where a fortunate minority enjoys prosperity and security.

Relevance: 10.00%

Abstract:

Mikael Juselius’ doctoral dissertation covers a range of significant issues in modern macroeconomics by empirically testing a number of important theoretical hypotheses. The first essay presents indirect evidence, within the framework of the cointegrated VAR model, on the elasticity of substitution between capital and labor, using Finnish manufacturing data. Instead of estimating the elasticity of substitution from the first-order conditions, he develops a new approach that utilizes a CES production function in a model with a three-stage decision process: investment in the long run, wage bargaining in the medium run, and price and employment decisions in the short run. He estimates the elasticity of substitution to be below one. The second essay tests the restrictions implied by the core equations of the New Keynesian Model (NKM) in a vector autoregressive (VAR) model, using both euro area and U.S. data. Both the New Keynesian Phillips curve and the aggregate demand curve are estimated and tested. The restrictions implied by the core equations of the NKM are rejected on both U.S. and euro area data. These results are important for further research. The third essay is methodologically similar to the second, but it concentrates on Finnish macro data by adopting a theoretical framework of an open economy. Juselius’ results suggest that the open economy NKM framework is too stylized to provide an adequate explanation for Finnish inflation. The final essay provides a macroeconometric model of Finnish inflation and its associated explanatory variables, and it estimates the relative importance of different inflation theories. His main finding is that Finnish inflation is primarily determined by excess demand in the product market and by changes in the long-term interest rate. This study is part of the research agenda carried out by the Research Unit of Economic Structure and Growth (RUESG). The aim of RUESG is to conduct theoretical and empirical research on important issues in industrial economics, real option theory, game theory, organization theory and the theory of financial systems, as well as to study problems in labor markets, macroeconomics, natural resources, taxation and time series econometrics. RUESG was established at the beginning of 1995 and is one of the National Centers of Excellence in research selected by the Academy of Finland. It is financed jointly by the Academy of Finland, the University of Helsinki, the Yrjö Jahnsson Foundation, the Bank of Finland and the Nokia Group. This support is gratefully acknowledged.
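For orientation, the core NKM equations whose cross-equation restrictions are tested in the second and third essays are typically written as a New Keynesian Phillips curve together with a forward-looking aggregate demand (IS) relation (generic textbook forms, quoted here for context rather than taken from the dissertation):

\pi_t = \beta\, E_t \pi_{t+1} + \kappa\, x_t, \qquad x_t = E_t x_{t+1} - \sigma^{-1}\left( i_t - E_t \pi_{t+1} - r_t^{n} \right),

where \pi_t is inflation, x_t the output gap, i_t the short-term nominal interest rate and r_t^{n} the natural real rate of interest. Testing the NKM then amounts to checking whether the restrictions these equations impose on the VAR dynamics are consistent with the data.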

Relevance: 10.00%

Abstract:

Nanomaterials with a hexagonally ordered atomic structure, e.g., graphene, carbon and boron nitride nanotubes, and white graphene (a monolayer of hexagonal boron nitride) possess many impressive properties. For example, the mechanical stiffness and strength of these materials are unprecedented. Also, the extraordinary electronic properties of graphene and carbon nanotubes suggest that these materials may serve as building blocks of next generation electronics. However, the properties of pristine materials are not always what is needed in applications, but careful manipulation of their atomic structure, e.g., via particle irradiation can be used to tailor the properties. On the other hand, inadvertently introduced defects can deteriorate the useful properties of these materials in radiation hostile environments, such as outer space. In this thesis, defect production via energetic particle bombardment in the aforementioned materials is investigated. The effects of ion irradiation on multi-walled carbon and boron nitride nanotubes are studied experimentally by first conducting controlled irradiation treatments of the samples using an ion accelerator and subsequently characterizing the induced changes by transmission electron microscopy and Raman spectroscopy. The usefulness of the characterization methods is critically evaluated and a damage grading scale is proposed, based on transmission electron microscopy images. Theoretical predictions are made on defect production in graphene and white graphene under particle bombardment. A stochastic model based on first-principles molecular dynamics simulations is used together with electron irradiation experiments for understanding the formation of peculiar triangular defect structures in white graphene. An extensive set of classical molecular dynamics simulations is conducted, in order to study defect production under ion irradiation in graphene and white graphene. In the experimental studies the response of carbon and boron nitride multi-walled nanotubes to irradiation with a wide range of ion types, energies and fluences is explored. The stabilities of these structures under ion irradiation are investigated, as well as the issue of how the mechanism of energy transfer affects the irradiation-induced damage. An irradiation fluence of 5.5x10^15 ions/cm^2 with 40 keV Ar+ ions is established to be sufficient to amorphize a multi-walled nanotube. In the case of 350 keV He+ ion irradiation, where most of the energy transfer happens through inelastic collisions between the ion and the target electrons, an irradiation fluence of 1.4x10^17 ions/cm^2 heavily damages carbon nanotubes, whereas a larger irradiation fluence of 1.2x10^18 ions/cm^2 leaves a boron nitride nanotube in much better condition, indicating that carbon nanotubes might be more susceptible to damage via electronic excitations than their boron nitride counterparts. An elevated temperature was discovered to considerably reduce the accumulated damage created by energetic ions in both carbon and boron nitride nanotubes, attributed to enhanced defect mobility and efficient recombination at high temperatures. Additionally, cobalt nanorods encapsulated inside multi-walled carbon nanotubes were observed to transform into spherical nanoparticles after ion irradiation at an elevated temperature, which can be explained by the inverse Ostwald ripening effect. 
The simulation studies on ion irradiation of the hexagonal monolayers yielded quantitative estimates of the types and abundances of defects produced within a large range of irradiation parameters. He, Ne, Ar, Kr, Xe, and Ga ions were considered in the simulations, with kinetic energies ranging from 35 eV to 10 MeV, and the role of the angle of incidence of the ions was studied in detail. A stochastic model was developed for utilizing the large amount of data produced by the molecular dynamics simulations. It was discovered that a high degree of selectivity over the types and abundances of defects can be achieved by carefully selecting the irradiation parameters, which can be of great use when precise patterning of graphene or white graphene using focused ion beams is planned.

Relevance: 10.00%

Abstract:

The aim of this dissertation is to model economic variables with a mixture autoregressive (MAR) model. The MAR model is a generalization of the linear autoregressive (AR) model. It consists of K linear autoregressive components, and at any given point in time one of these components is randomly selected to generate a new observation for the time series. The mixture probability can be constant over time or a direct function of some observable variable. Many economic time series have properties that cannot be described by linear and stationary time series models, and a nonlinear autoregressive model such as the MAR model can be a plausible alternative for such series. In this dissertation the MAR model is used to model stock market bubbles and the relationship between inflation and the interest rate. In the case of the inflation rate, we arrive at a MAR model in which the inflation process is less mean-reverting when inflation is high than when it is at a normal level. The interest rate moves one-for-one with expected inflation. We use data from the Livingston survey as a proxy for inflation expectations and find that survey inflation expectations are not perfectly rational. According to our results, information stickiness plays an important role in expectation formation. We also find that survey participants have a tendency to underestimate inflation. A MAR model is also used to model stock market bubbles and crashes. This model has two regimes: the bubble regime and the error-correction regime. In the error-correction regime the price depends on a fundamental factor, the price-dividend ratio, while in the bubble regime the price is independent of fundamentals. In this model a stock market crash is usually caused by a regime switch from the bubble regime to the error-correction regime. According to our empirical results, bubbles are related to low inflation. Our model also implies that bubbles influence the distribution of investment returns in both the short and the long run.
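To fix ideas, a K-component mixture autoregressive process of order p can be written as follows (a generic MAR specification stated for illustration; the exact parameterization used in the dissertation may differ):

y_t = \phi_{k,0} + \phi_{k,1}\, y_{t-1} + \dots + \phi_{k,p}\, y_{t-p} + \sigma_k\, \varepsilon_t \quad \text{with probability } \alpha_{k,t}, \qquad \sum_{k=1}^{K} \alpha_{k,t} = 1, \quad \varepsilon_t \sim \text{i.i.d. } N(0,1).

With constant weights \alpha_{k,t} = \alpha_k the mixture is time-invariant; letting the weights depend on an observable variable, such as the price-dividend ratio, produces the regime-switching behaviour between the bubble and error-correction regimes described above.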

Relevance: 10.00%

Abstract:

This thesis studies the interest-rate policy of the ECB by estimating monetary policy rules using real-time data and central bank forecasts. The aim of the estimations is to characterize a decade of common monetary policy and to examine how different models perform at this task. The estimated rules include contemporaneous Taylor rules, forward-looking Taylor rules, nonlinear rules and forecast-based rules. The nonlinear models allow for the possibility of zone-like preferences and an asymmetric response to key variables. The models therefore encompass the most popular sub-group of simple models used for policy analysis as well as the more unusual nonlinear approach. In addition to the empirical work, this thesis also contains a more general discussion of monetary policy rules, mostly from a New Keynesian perspective. This discussion includes an overview of some notable related studies, optimal policy, policy gradualism and several other related subjects. The regression estimations are performed with either least squares or the generalized method of moments, depending on the requirements of the estimations. The estimations use data from both the Euro Area Real-Time Database and the central bank forecasts published in ECB Monthly Bulletins. These data sources represent some of the best data available for this kind of analysis. The main results of this thesis are that forward-looking behavior appears highly prevalent, but that standard forward-looking Taylor rules offer only ambivalent results with regard to inflation. Nonlinear models are shown to work, but they do not have a strong rationale over a simpler linear formulation. However, the forecasts appear to be highly useful in characterizing policy and may offer the most accurate depiction of a predominantly forward-looking central bank. In particular, the inflation response appears much stronger, while the output response becomes highly forward-looking as well.
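For concreteness, the contemporaneous and forward-looking Taylor-type rules estimated in this kind of study have the generic form (illustrative notation only, not the exact specifications of the thesis):

i_t = \rho\, i_{t-1} + (1-\rho)\left[\, \alpha + \beta\,(\pi_t - \pi^{*}) + \gamma\, x_t \,\right] + u_t, \qquad i_t = \rho\, i_{t-1} + (1-\rho)\left[\, \alpha + \beta\,(E_t \pi_{t+k} - \pi^{*}) + \gamma\, E_t x_{t+q} \,\right] + u_t,

where i_t is the policy rate, \pi^{*} the inflation target, x_t the output gap and \rho the degree of interest-rate smoothing (policy gradualism). The forecast-based variants replace the expectation terms with the central bank's published forecasts, while the nonlinear variants let the responses \beta and \gamma vary with the size or sign of the deviations (zone-like or asymmetric preferences).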