50 results for Implied inflation


Relevance: 10.00%

Publisher:

Abstract:

Genetic susceptibility to juvenile idiopathic arthritis (JIA) was studied in the genetically homogeneous Finnish population by collecting families with two or three patients affected by this disease from cases seen in the Rheumatism Foundation Hospital. The number of families ranged in different studies from 37 to 45, and the total number of patients with JIA from among whom these cases were derived was 2,000 to 2,300. Characteristics of the disease in affected siblings in Finland were compared with a population-based series and with a sibling series from the United States. A thorough clinical and ophthalmological examination was made of all affected patients belonging to the sibpair series. Information on the occurrence of chronic rheumatic diseases in parents was collected by questionnaire and the diagnoses were confirmed from hospital records. All patients, their parents and most of the healthy sibs were typed for human leukocyte antigen (HLA) alleles at loci A, C, B, DR and DQ. The HLA allele distribution of the cases was compared with corresponding data from Finnish bone marrow donors. The genetic component in JIA was found to be more significant than previously believed. A concordance rate of 25% for a disease with a population prevalence of 1 per 1,000 implied a relative risk of 250 for a monozygotic (MZ) twin. The sibling risk of an affected individual was estimated at about 15- to 20-fold. The disease was basically similar in familial and sporadic cases; the mean age at disease onset was, however, lower in familial cases (4.8 years vs 7.4 years). Three sibpairs (3.4 expected) were concordant for the presence of asymptomatic uveitis. Uveitis would thus not appear to have any genetic component of its own, separate from the genetic basis of JIA. Four of the parents had JIA (0.2 cases expected), four had a type of rheumatoid factor-negative arthritis similar to that seen in juvenile patients but commencing in adulthood, and one had spondyloarthropathy (SPA). These findings provide additional support for the concept of a genetic predisposition to JIA and suggest the existence of a new disease entity, JIA of adult onset. Both the linkage analysis of the affected sibpairs and the association analysis of nuclear families provided overwhelming evidence of a major contribution of HLA to the genetic susceptibility to JIA. The association analysis in the Finnish population confirmed that the most significant associations prevailed for DRB1*0801 and DQB1*0402, as expected from previous observations, and indicated an independent role for Cw*0401.
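The relative-risk figure quoted above follows directly from the stated concordance rate and population prevalence; written out (with λ denoting risk relative to the general population):

```latex
% Worked out from the figures stated in the abstract
\[
\lambda_{\mathrm{MZ}} \;=\; \frac{\text{MZ concordance rate}}{\text{population prevalence}}
\;=\; \frac{0.25}{0.001} \;=\; 250
\]
```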

Relevance: 10.00%

Publisher:

Abstract:

Currently, we live in an era characterized by the completion and first runs of the LHC accelerator at CERN, which is hoped to provide the first experimental hints of what lies beyond the Standard Model of particle physics. In addition, the last decade has witnessed a new dawn of cosmology, which has truly emerged as a precision science. Largely due to the WMAP measurements of the cosmic microwave background, we now believe we have quantitative control of much of the history of our universe. These two experimental windows offer us not only an unprecedented view of the smallest and largest structures of the universe, but also a glimpse of the very first moments in its history. At the same time, they require theorists to focus on the fundamental challenges awaiting at the boundary of high energy particle physics and cosmology. What were the contents and properties of matter in the early universe? How is one to describe its interactions? What implications do the various models of physics beyond the Standard Model have for the subsequent evolution of the universe? In this thesis, we explore the connection between supersymmetric theories in particular and the evolution of the early universe. First, we provide the reader with a general introduction to modern-day particle cosmology from two angles: on the one hand by reviewing our current knowledge of the history of the early universe, and on the other hand by introducing the basics of supersymmetry and its derivatives. Subsequently, with the help of the developed tools, we direct attention to the specific questions addressed in the three original articles that form the main scientific content of the thesis. Each of these papers concerns a distinct cosmological problem, ranging from the generation of the matter-antimatter asymmetry to inflation, and finally to the origin or very early stage of the universe. They nevertheless share a common factor in their use of the machinery of supersymmetric theories to address open questions in the corresponding cosmological models.

Relevance: 10.00%

Publisher:

Abstract:

In this thesis we consider the phenomenology of supergravity, and in particular the particle called the "gravitino". We begin with an introductory part, where we discuss the theories of inflation, supersymmetry and supergravity. Gravitino production is then investigated in detail, by considering the research papers included here. First we study the scattering of massive W bosons in the thermal bath of particles during the period of reheating. We show that the process generates non-trivial contributions to the cross section, which eventually lead to unitarity breaking above a certain scale. This happens because, in the annihilation diagram, the longitudinal degrees of freedom in the propagator of the gauge bosons disappear from the amplitude by virtue of the supergravity vertex. Accordingly, the longitudinal polarizations of the on-shell W become strongly interacting in the high energy limit. By studying the process with both gauge and mass eigenstates, it is shown that the inclusion of diagrams with off-shell scalars of the MSSM does not cancel the divergences. Next, we approach cosmology more closely and study the decay of a scalar field S into gravitinos at the end of inflation. Once its mass is comparable to the Hubble rate, the field starts coherent oscillations about the minimum of its potential and decays perturbatively. We embed S in a model of gauge mediation with metastable vacua, where the hidden sector is of the O'Raifeartaigh type. First we discuss the dynamics of the field in the expanding background, then radiative corrections to the scalar potential V(S) and to the Kähler potential are calculated. Constraints on the reheating temperature are accordingly obtained by demanding that the gravitinos thus produced account for the observed Dark Matter density. We consistently revise earlier results in the literature, and find that the gravitino number density and T_R are extremely sensitive to the parameters of the model. This means that it is easy to account for gravitino Dark Matter with an arbitrarily low reheating temperature.

Relevance: 10.00%

Publisher:

Abstract:

Cosmological inflation is the dominant paradigm for explaining the origin of structure in the universe. According to the inflationary scenario, there was a period of nearly exponential expansion in the very early universe, long before nucleosynthesis. Inflation is commonly considered a consequence of some scalar field or fields whose energy density starts to dominate the universe. The inflationary expansion converts the quantum fluctuations of the fields into classical perturbations on superhorizon scales, and these primordial perturbations are the seeds of the structure in the universe. Moreover, inflation also naturally explains the high degree of homogeneity and spatial flatness of the early universe. The real challenge of inflationary cosmology lies in trying to establish a connection between the fields driving inflation and theories of particle physics. In this thesis we concentrate on inflationary models at scales well below the Planck scale. The low scale allows us to seek candidates for the inflationary matter within extensions of the Standard Model, but typically also implies fine-tuning problems. We discuss a low-scale model where inflation is driven by a flat direction of the Minimal Supersymmetric Standard Model. The relation between the potential along the flat direction and the underlying supergravity model is studied. The low inflationary scale requires an extremely flat potential, but we find that in this particular model the associated fine-tuning problems can be solved in a rather natural fashion in a class of supergravity models. For this class of models, the flatness is a consequence of the structure of the supergravity model and is insensitive to the vacuum expectation values of the fields that break supersymmetry. Another low-scale model considered in the thesis is the curvaton scenario, where the primordial perturbations originate from quantum fluctuations of a curvaton field, which is different from the fields driving inflation. The curvaton gives a negligible contribution to the total energy density during inflation, but its perturbations become significant in the post-inflationary epoch. The separation between the fields driving inflation and the fields giving rise to primordial perturbations opens up new possibilities to lower the inflationary scale without introducing fine-tuning problems. The curvaton model typically gives rise to a relatively large level of non-Gaussian features in the statistics of primordial perturbations. We find that the level of non-Gaussian effects is heavily dependent on the form of the curvaton potential. Future observations that provide more accurate information on the non-Gaussian statistics can therefore place constraining bounds on the curvaton interactions.

Relevance: 10.00%

Publisher:

Abstract:

In this thesis we examine multi-field inflationary models of the early Universe. Since non-Gaussianities may allow for the possibility to discriminate between models of inflation, we compute deviations from a Gaussian spectrum of primordial perturbations by extending the delta-N formalism. We use N-flation as a concrete model; our findings show that these models are generically indistinguishable as long as the slow roll approximation is still valid. Besides computing non-Guassinities, we also investigate Preheating after multi-field inflation. Within the framework of N-flation, we find that preheating via parametric resonance is suppressed, an indication that it is the old theory of preheating that is applicable. In addition to studying non-Gaussianities and preheatng in multi-field inflationary models, we study magnetogenesis in the early universe. To this aim, we propose a mechanism to generate primordial magnetic fields via rotating cosmic string loops. Magnetic fields in the micro-Gauss range have been observed in galaxies and clusters, but their origin has remained elusive. We consider a network of strings and find that rotating cosmic string loops, which are continuously produced in such networks, are viable candidates for magnetogenesis with relevant strength and length scales, provided we use a high string tension and an efficient dynamo.
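For reference, the delta-N expressions usually quoted in this context are reproduced below in standard, model-independent notation, where N is the number of e-folds and subscripts denote derivatives with respect to the field values at horizon crossing; this is background material, not a result specific to the thesis.

```latex
% Standard delta-N expansion of the curvature perturbation and the
% corresponding non-linearity parameter (general multi-field form)
\[
\zeta \;=\; \sum_a N_a\,\delta\varphi^a
      \;+\; \frac{1}{2}\sum_{a,b} N_{ab}\,\delta\varphi^a\,\delta\varphi^b \;+\;\dots ,
\qquad
\frac{6}{5}\, f_{\mathrm{NL}} \;=\; \frac{\sum_{a,b} N_a N_b N_{ab}}{\left(\sum_c N_c N_c\right)^{2}}
\]
```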

Relevance: 10.00%

Publisher:

Abstract:

The increase in drug use and related harms in the late 1990s in Finland has come to be referred to as the second drug wave. In addition to using criminal justice as a basis of drug policy, new kinds of drug regulation were introduced. Some of the new regulation strategies were referred to as "harm reduction". The most widely known practices of harm reduction include needle and syringe exchange programmes for intravenous drug users and medicinal substitution and maintenance treatment programmes for opiate users. The purpose of the study is to examine the change in drug policy in Finland and particularly the political struggle surrounding harm reduction in the context of this change. The aim is, first, to analyse the content of harm reduction policy and the dynamics of its emergence and, second, to assess to what extent harm reduction undermines or threatens traditional drug policy. The concept of harm reduction is typically associated with a drug policy strategy that employs the public health approach and where the principal focus of regulation is on drug-related health harms and risks. On the other hand, harm reduction policy has also been given other interpretations, relating, in particular, to human rights and social equality. In Finland, harm reduction can also be seen to have its roots in criminal policy. The general conclusion of the study is that rather than posing a threat to a prohibitionist drug policy, harm reduction has come to form part of it. The implementation of harm reduction by setting up health counselling centres for drug users, with the main focus on needle exchange, and by extending substitution treatment has implied the creation of specialised services based on medical expertise and an increasing involvement of the medical profession in addressing drug problems. At the same time the criminal justice control of drug use has been intensified. Accordingly, harm reduction has not entailed a shift to a more liberal drug policy, nor has it undermined the traditional policy with its emphasis on total drug prohibition. Instead, harm reduction in combination with a prohibitionist penal policy constitutes a new dual-track drug policy paradigm. The study draws on the constructionist tradition of research on social problems and movements, where the analysis centres on claims made about social problems, claim-makers, ways of making claims and related social mobilisation. The research material mainly consists of administrative documents and interviews with key stakeholders. The doctoral study consists of five original articles and a summary article. The first article gives an overview of the strained process of change in drug policy and policy trends around the turn of the millennium. The second article focuses on the concept of harm reduction and the international organisations and groupings involved in defining it. The third article describes the process that in 1996-97 led to the creation of the first Finnish national drug policy strategy by reconciling mutually contradictory views of addressing the drug problem, at the same time as the way was paved for harm reduction measures. The fourth article seeks to explain the relatively rapid diffusion of needle exchange programmes after 1996. The fifth article assesses substitution treatment as a harm reduction measure from the viewpoint of the associations of opioid users and their family members.

Relevance: 10.00%

Publisher:

Abstract:

Inflation is a period of accelerated expansion in the very early universe, which has the appealing aspect that it can create primordial perturbations via quantum fluctuations. These primordial perturbations have been observed in the cosmic microwave background, and they also function as the seeds of all large-scale structure in the universe. Curvaton models are simple modifications of the standard inflationary paradigm, where inflation is driven by the energy density of the inflaton, but another field, the curvaton, is responsible for producing the primordial perturbations. The curvaton decays after inflation has ended, whereupon the isocurvature perturbations of the curvaton are converted into adiabatic perturbations. Since the curvaton must decay, it must have some interactions. Additionally, realistic curvaton models typically have some self-interactions. In this work we consider self-interacting curvaton models, where the self-interaction is a monomial in the potential, suppressed by the Planck scale, so that the self-interaction is very weak. Nevertheless, since the self-interaction makes the equations of motion non-linear, it can modify the behaviour of the model very drastically. The most intriguing aspect of this behaviour is that the final properties of the perturbations become highly dependent on the initial values. Departures from a Gaussian distribution are important observables of the primordial perturbations. Due to the non-linearity of the self-interacting curvaton model and its sensitivity to initial conditions, it can produce significant non-Gaussianity of the primordial perturbations. In this work we investigate the non-Gaussianity produced by the self-interacting curvaton, and demonstrate that the non-Gaussianity parameters do not obey the analytically derived approximate relations often cited in the literature. Furthermore, we also consider a self-interacting curvaton with a mass at the TeV scale. Motivated by realistic particle physics models such as the Minimal Supersymmetric Standard Model, we demonstrate that a curvaton model within this mass range can be responsible for the observed perturbations if it can decay late enough.
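For orientation, the approximate relation usually cited in the literature for a non-self-interacting (quadratic) curvaton, which the self-interacting models studied here are found to violate, is reproduced below; standard notation is assumed, with r the curvaton's relative contribution to the energy density at decay.

```latex
% Often-quoted approximation for a purely quadratic curvaton
% (the self-interacting case studied above departs from this)
\[
f_{\mathrm{NL}} \;\simeq\; \frac{5}{4r} \;-\; \frac{5}{3} \;-\; \frac{5r}{6},
\qquad
r \;=\; \left.\frac{3\rho_\sigma}{3\rho_\sigma + 4\rho_\gamma}\right|_{\text{decay}}
\]
```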

Relevance: 10.00%

Publisher:

Abstract:

The study seeks to find out whether the real burden of personal taxation has increased or decreased. In order to determine this, we investigate how the same real income has been taxed in different years. Whenever the taxes for the same real income in a given year are higher than in the base year, the real tax burden has increased; if they are lower, the real tax burden has decreased. The study thus seeks to estimate how changes in the tax regulations affect the real tax burden. It should be kept in mind that the progression in the central government income tax schedule ensures that a real change in income will bring about a change in the tax ratio. Inflation will likewise increase the real tax burden if the tax schedules are kept nominally unchanged. In the calculations of the study it is assumed that the real income remains constant, so that we obtain an unbiased measure of the effects of governmental actions in real terms. The main factors influencing the amount of income taxes an individual must pay are as follows: gross income (income subject to central and local government taxes); deductions from gross income and taxes calculated according to tax schedules; the central government income tax schedule (progressive income taxation); and the rates for local taxes and for social security payments (proportional taxation). In the study we investigate how much a certain group of taxpayers would have paid in taxes according to the actual tax regulations prevailing in different years if their income were kept constant in real terms. Other factors affecting tax liability are kept strictly unchanged (as constants). The resulting taxes, expressed in fixed prices, are then compared to the taxes levied in the base year (hypothetical taxation). The question we are addressing is thus how much taxes a certain group of taxpayers with the same socioeconomic characteristics would have paid on the same real income according to the actual tax regulations prevailing in different years. This has been suggested as the main way to measure real changes in taxation, although there are several alternative measures with essentially the same aim. Next, an aggregate indicator of changes in income tax rates is constructed. It is designed to show how much the taxation of income has increased or decreased on average from one year to the next. The main question remains how the aggregation over all income levels should be performed. In order to determine the average real changes in the tax scales, the difference functions (the differences between the actual and hypothetical taxation functions) were aggregated using taxable income as weights. Besides the difference functions, the relative changes in real taxes can be used as indicators of change. In this case the ratio between the taxes computed according to the new and the old situation indicates whether the taxation has become heavier or lighter. The relative changes in tax scales can be described in a way similar to that used in describing the cost of living, or by means of price indices. For example, we can use Laspeyres' price index formula for computing the ratio between the taxes determined by the new tax scales and the old tax scales. The formula answers the question of how much more or less will be paid in taxes according to the new tax scales than according to the old ones when the real income situation corresponds to the old situation. In real terms, the central government tax burden experienced a steady decline from its high post-war level up until the mid-1950s.
The real tax burden then drifted upwards until the mid-1970s; the real level of taxation in 1975 was twice that of 1961. In the 1980s there was a stable phase owing to the inflation corrections of the tax schedules. In 1989 the tax schedule was lowered drastically, and from the mid-1990s onwards the tax schedules have decreased the real tax burden significantly. Local tax rates have risen continuously, from 10 percent in 1948 to nearly 19 percent in 2008. Deductions have lowered the real tax burden especially in recent years. The aggregate figures indicate how the tax ratio for the same real income has changed over the years according to the prevailing tax regulations. We call the tax ratio calculated in this manner the real income tax ratio. A change in the real income tax ratio depicts an increase or decrease in the real tax burden. The real income tax ratio declined for some years after the war, and from the beginning of the 1960s to the mid-1970s it nearly doubled. Since the mid-1990s the real income tax ratio has fallen by about 35%.
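The Laspeyres-type comparison described above can be made concrete with a small sketch; the tax schedules and incomes below are invented for illustration and are not taken from the study.

```python
# Minimal sketch of the aggregation described above (all numbers invented):
# taxes under the new and old schedules are computed on the same real income
# for the same group of taxpayers, and the ratio of the aggregates gives a
# Laspeyres-type real income tax index.

def progressive_tax(income, schedule):
    """Tax from a schedule given as (bracket lower bound, marginal rate) pairs, ascending."""
    tax = 0.0
    for i, (lower, rate) in enumerate(schedule):
        upper = schedule[i + 1][0] if i + 1 < len(schedule) else float("inf")
        if income > lower:
            tax += (min(income, upper) - lower) * rate
    return tax

# hypothetical old and new central government tax schedules, in real terms
old_schedule = [(0, 0.0), (10_000, 0.15), (30_000, 0.30), (60_000, 0.45)]
new_schedule = [(0, 0.0), (12_000, 0.12), (32_000, 0.28), (60_000, 0.40)]

# same group of taxpayers, same real incomes under both regimes
real_incomes = [8_000, 20_000, 35_000, 50_000, 90_000]

old_total = sum(progressive_tax(y, old_schedule) for y in real_incomes)
new_total = sum(progressive_tax(y, new_schedule) for y in real_incomes)
index = new_total / old_total
print(f"real income tax index: {index:.3f}  (below 1 means the real tax burden has eased)")
```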

Relevance: 10.00%

Publisher:

Abstract:

The 1980s and the early 1990s have proved to be an important turning point in the history of the Nordic welfare states. After this breaking point, the Nordic social order has been built upon a new foundation. This study shows that the new order is mainly built upon new hierarchies and control mechanisms that have been developed consistently through economic and labour market policy measures. During the post-war period the Nordic welfare states to an increasing extent created equality of opportunity and scope for agency among people. Public social services were available to all and the tax-benefit system maintained an even income distribution. During this golden era of the Nordic welfare state, the scope for agency was, however, limited by social structures. Public institutions and law tended to categorize people according to their life circumstances, ascribing them a predefined role. In the 1980s and 1990s this collectivist social order began to mature and became subject to political renegotiation. Signs of a new social order in the Nordic countries have included the liberalisation of the financial markets, the privatising of public functions and the redefinition of the role of the public sector. It is now possible to reassess the ideological foundations of this new order. In contrast to widely used political rhetoric, the foundation of the new order has not been the ideas of individual freedom or choice. Instead, the most important aim appears to have been to control and direct people to act in accordance with the rules of the market. The various levels of government and the social security system have been redirected to serve this goal. Instead of being a mechanism for redistributing income, the Nordic social security system has been geared towards creating new hierarchies on the Nordic labour markets. During the past decades, conditions for receiving income support and unemployment benefit have been tightened in all Nordic countries. As a consequence, people have been forced to accept deteriorating terms and conditions on the labour market. Country-specific variations exist, however: in sum, Sweden has been the most conservative, Denmark the most innovative and Finland the most radical in reforming labour market policy. The new hierarchies on the labour market have coincided with slow or non-existent growth of real wages and with strong growth in the share of capital income. Slow growth of real wages has kept inflation low and thus secured the value of capital. Societal development has thus progressed from equality of opportunity during the age of the welfare states towards a hierarchical social order where the majority of people face increasing constraints and a fortunate minority enjoys prosperity and security.

Relevance: 10.00%

Publisher:

Abstract:

This paper examines empirically the effect firm reputation has on the determinants of debt maturity. Utilising data from the European primary bond market between 1999 and 2005, I find that the maturity choice of issuers with a higher reputation is less sensitive to macroeconomic conditions, market credit risk premiums, prevailing firm credit quality and the size of the debt issue. The annualised coupon payments are shown to be a significant factor in determining debt maturity and, once controlled for, reveal a monotonically increasing relationship between credit quality and debt maturity. Finally, I show that issuers lacking a credit rating have an implied credit quality positioned between investment-grade and speculative-grade debt.
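A stylised version of the kind of specification the abstract implies, with reputation interacted with the other determinants, might look as follows; the variable names and synthetic data are invented for the sketch and are not the paper's actual variables or estimator.

```python
# Stylised sketch (synthetic data, invented variable names) of testing whether
# issuer reputation dampens the sensitivity of debt maturity to the other
# determinants, via interaction terms in an OLS regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "reputation":     rng.uniform(0, 1, n),   # e.g. scaled years since first bond issue
    "macro":          rng.normal(0, 1, n),    # macroeconomic conditions proxy
    "risk_premium":   rng.normal(0, 1, n),    # market credit risk premium
    "credit_quality": rng.normal(0, 1, n),    # issuer credit quality
    "issue_size":     rng.normal(0, 1, n),
    "coupon":         rng.normal(0, 1, n),    # annualised coupon payments
})
# synthetic maturity: sensitivities shrink as reputation rises
base = 5 + 1.0 * df.macro + 0.8 * df.credit_quality + 0.5 * df.issue_size
df["maturity"] = base * (1 - 0.6 * df.reputation) + 0.7 * df.coupon + rng.normal(0, 1, n)

model = smf.ols(
    "maturity ~ reputation * (macro + risk_premium + credit_quality + issue_size) + coupon",
    data=df,
).fit()
print(model.summary().tables[1])   # negative interaction terms indicate lower sensitivity
```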

Relevance: 10.00%

Publisher:

Abstract:

The use of different time units in option pricing may lead to inconsistent estimates of time decay and spurious jumps in implied volatilities. Different time units in the pricing model lead to different implied volatilities even though the option price itself is the same. The chosen time unit should make it necessary to adjust the volatility parameter only when there are fundamental reasons for doing so, and not because of a wrong specification of the model. This paper examined the effects of option pricing under different time hypotheses and empirically investigated which time frame the option markets in Germany employ over weekdays. The paper specifically tries to form a picture of how the market prices options. The results seem to verify that the German market behaves in a fashion that deviates from the most traditional time units in option pricing, calendar and trading days. The study also showed that the implied volatility on Thursdays was somewhat higher and thus differed from the pattern of the other days of the week. Further investigation of this effect with a GARCH model showed that although traditional tests, such as analysis of variance, indicated a negative return for Thursdays during the same period as the implied volatilities used, this was not supported by the GARCH model.
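A minimal sketch of the mechanism (all numbers invented): the same quoted price backs out different Black-Scholes implied volatilities depending on whether time to expiry is counted in calendar days or trading days.

```python
# Same option price, two time conventions, two implied volatilities.
from math import log, sqrt, exp, erf
from scipy.optimize import brentq

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call, T in years."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, r, T):
    """Invert the pricing formula for sigma with a bracketing root finder."""
    return brentq(lambda sig: bs_call(S, K, r, sig, T) - price, 1e-6, 5.0)

S, K, r = 100.0, 100.0, 0.03
market_price = 4.0            # hypothetical quoted call price
calendar_days = 30            # calendar days to expiry
trading_days = 21             # trading days over the same period

T_cal = calendar_days / 365.0   # calendar-day convention
T_trd = trading_days / 252.0    # trading-day convention

print("calendar-day implied vol:", implied_vol(market_price, S, K, r, T_cal))
print("trading-day implied vol: ", implied_vol(market_price, S, K, r, T_trd))
```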

Relevance: 10.00%

Publisher:

Abstract:

The objective of this paper is to investigate and model the characteristics of the prevailing volatility smiles and surfaces on the DAX and ESX index options markets. Continuing the line of research on Implied Volatility Functions, the Standardized Log-Moneyness model is introduced and fitted to historical data. The model replaces the constant volatility parameter of the Black & Scholes pricing model with a matrix of volatilities with respect to moneyness and maturity, and is tested out-of-sample. Considering the dynamics, the results support the hypotheses put forward in this study, implying that the smile increases in magnitude when maturity and ATM volatility decrease, and that a change in the underlying asset is negatively, and a change in time to maturity positively, correlated with implied ATM volatility. Further, the Standardized Log-Moneyness model indicates an improvement in pricing accuracy compared to previous Implied Volatility Function models, although the parameters of the models need to be re-estimated continuously for them to fully capture the changing dynamics of the volatility smiles.
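The abstract does not spell out the functional form; as a rough illustration of the idea, a smile per maturity can be fitted in a standardized log-moneyness coordinate and stored as a small (moneyness, maturity) volatility table. The coordinate definition, the quadratic form and the quotes below are all assumptions made for the sketch.

```python
# Rough illustration (functional form and data assumed, not from the paper):
# fit implied volatility per maturity bucket as a quadratic in standardized
# log-moneyness x = ln(K/F) / (sigma_ATM * sqrt(T)).
import numpy as np

# hypothetical quotes: (strike, forward, maturity_yrs, atm_vol, implied_vol)
quotes = [
    (90, 100, 0.25, 0.20, 0.235), (100, 100, 0.25, 0.20, 0.200), (110, 100, 0.25, 0.20, 0.215),
    (90, 100, 1.00, 0.22, 0.240), (100, 100, 1.00, 0.22, 0.220), (110, 100, 1.00, 0.22, 0.228),
]

surface = {}  # maturity -> quadratic coefficients in standardized log-moneyness
for T in sorted({q[2] for q in quotes}):
    xs, vols = [], []
    for K, F, mat, atm, iv in quotes:
        if mat == T:
            xs.append(np.log(K / F) / (atm * np.sqrt(T)))
            vols.append(iv)
    surface[T] = np.polyfit(xs, vols, 2)   # least-squares fit of the smile

def smile_vol(K, F, T, atm_vol):
    """Look up the fitted smile for maturity T at the given strike."""
    x = np.log(K / F) / (atm_vol * np.sqrt(T))
    return np.polyval(surface[T], x)

print(smile_vol(95, 100, 0.25, 0.20))
```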

Relevance: 10.00%

Publisher:

Abstract:

The objective of this paper is to suggest a method that accounts for the impact of volatility smile dynamics when performing scenario analysis for a portfolio consisting of vanilla options. As the volatility smile is documented to change at least with the level of implied at-the-money volatility, a suitable model is included here in the calculation of the simulated market scenarios. By constructing simple portfolios of index options and comparing the ex ante risk exposure measured using different pricing methods with realized market values ex post, the improvement gained from incorporating the model is monitored. The examples analysed in the study generate results that statistically support the conclusion that the most accurate scenarios are those calculated using the model that accounts for the dynamics of the smile. Thus, we show that the differences emanating from the volatility smile are apparent and should be accounted for, and that the methodology presented herein is one suitable alternative for doing so.
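A minimal sketch of the underlying idea (the toy smile, its parameters and the scenario numbers are invented, not the paper's model): in a simulated scenario the smile is re-anchored to the scenario's ATM volatility level before repricing, instead of being kept frozen at today's smile.

```python
# Reprice a vanilla option under a scenario with the smile either frozen
# (static) or moved along with the scenario ATM volatility (dynamic).
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, sigma, T):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def smile(K, S, atm_vol, skew=-0.5, curv=2.0):
    """Toy smile: vol as a function of log-moneyness, anchored at the ATM level."""
    m = np.log(K / S)
    return atm_vol + skew * m + curv * m**2

# today's market
S0, K, r, T, atm0 = 100.0, 110.0, 0.02, 0.5, 0.20
today_price = bs_call(S0, K, r, smile(K, S0, atm0), T)

# scenario: spot drops 10%, ATM vol jumps to 30%
S1, atm1 = 90.0, 0.30
static_price  = bs_call(S1, K, r, smile(K, S0, atm0), T)  # smile frozen at today's level
dynamic_price = bs_call(S1, K, r, smile(K, S1, atm1), T)  # smile re-anchored to scenario ATM vol

print("scenario P&L, static smile: ", static_price - today_price)
print("scenario P&L, dynamic smile:", dynamic_price - today_price)
```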

Relevance: 10.00%

Publisher:

Abstract:

The multifaceted passive present participle in Finnish. This study investigates the uses of the passive present participle in Finnish. The participle occurs in a variety of syntactic environments and exhibits a rich polysemy. Former descriptions have treated it as a mainly modal element, but it has several non-modal uses as well. The present study provides an overview of its uses and meanings, with the main focus on the factors which trigger the modal reading. In addition, the study contains two case studies on modal periphrastic constructions consisting of the verb 'to be' and the passive present participle: the Obligation construction, e.g., on men-tä-vä [is go-pass-ptc], and the Possibility construction, e.g., on pelaste-tta-v-i-ssa [is save-pass-ptc-pl-ine]. The study is based on empirical data of 9,000 sentences obtained from i) large collections of transcribed material from Finnish dialects, ii) a corpus of modern Finnish newspaper texts, and iii) corpora of Old Finnish texts. Both in colloquial and standard Finnish the reading of the participle is highly dependent on the context and determined by such factors as the overall syntactic environment and other co-occurring elements. One of the main findings here is that the Finnish passive present participle is not modal per se. The contextual modal reading arises whenever the state of affairs is conceptualized from the viewpoint of the implied subject of the participle, and the meaning of possibility or obligation depends mostly on whether the situation is pleasant or undesirable. In the sections examining the grammaticalization of the Possibility and Obligation constructions, the perspective is diachronic. Both constructions have derived from copula constructions with the passive present participle as a predicate (adjective or adverb). These sections show how a linguistic change can be investigated on the basis of the patterns of usage in the empirical data. The Possibility construction is currently going through a restructuration into a passive verbal complex. The source of this construction is reflected in its present-day use by the fact that it is heavily biased towards a small set of verbs. The Obligation construction has grammaticalized into a construction comparable to a compound tense. Patterns of use of the construction show that the grammaticalization originates in specific syntactic constructions with an implication of practical necessity. Furthermore, it is shown that the Obligation construction has grammaticalized in different directions in standard and colloquial Finnish. Unlike studies of the most typical phenomena investigated in the literature on the grammaticalization of modality, the present study opens new perspectives and methods for the discussion of these questions.