31 results for Inflation, Near-Money, Welfare Cost.
Abstract:
The dissertation deals with remote narrowband measurements of the electromagnetic radiation emitted by lightning flashes. A lightning flash consists of a number of sub-processes. The return stroke, which transfers electrical charge from the thundercloud to the ground, is electromagnetically an impulsive wideband process; that is, it emits radiation at most frequencies in the electromagnetic spectrum, but its duration is only some tens of microseconds. Before and after the return stroke, multiple sub-processes redistribute electrical charges within the thundercloud. These sub-processes can last for tens to hundreds of milliseconds, many orders of magnitude longer than the return stroke. Each sub-process causes radiation with specific time-domain characteristics, having maxima at different frequencies. Thus, if the radiation is measured at a single narrow frequency band, it is difficult to identify the sub-processes, and some sub-processes can be missed altogether. However, narrowband detectors are simple to design and miniaturize. In particular, near the High Frequency (HF) band (3 MHz to 30 MHz), ordinary shortwave radios can, in principle, be used as detectors. This dissertation utilizes a prototype detector which is essentially a handheld AM radio receiver. Measurements were made in Scandinavia, and several independent data sources were used to identify lightning sub-processes, as well as the distance to each individual flash. It is shown that multiple sub-processes radiate strongly near the HF band. The return stroke usually radiates intensely, but it cannot be reliably identified from the time-domain signal alone. This means that a narrowband measurement is best used to characterize the energy of the radiation integrated over the whole flash, without attempting to identify individual processes. The dissertation analyzes the conditions under which this integrated energy can be used to estimate the distance to the flash.
It is shown that flash-by-flash variations are large, but the integrated energy is very sensitive to changes in the distance, dropping as approximately the inverse cube root of the distance. Flashes can, in principle, be detected at distances of more than 100 km, but since the ground conductivity can vary, ranging accuracy drops dramatically at distances larger than 20 km. These limitations mean that individual flashes cannot be ranged accurately using a single narrowband detector, and the useful range is limited to at most 30 kilometers. Nevertheless, simple statistical corrections are developed, which enable an accurate estimate of the distance to the closest edge of an active storm cell, as well as the approach speed. The results of the dissertation could therefore have practical applications in real-time short-range lightning detection and warning systems.
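The power-law ranging idea described above can be sketched as follows. This is a minimal illustration only: the exponent p and the reference energy e0 below are hypothetical placeholders, not values taken from the dissertation, which instead develops statistical corrections on top of such a relation.

```python
# Illustrative sketch: inverting a power-law decay E(d) = e0 * d**(-p)
# to estimate the distance d from the integrated narrowband energy.
# Both e0 and p are made-up illustrative parameters.

def distance_from_energy(energy, e0=1.0, p=3.0):
    """Invert E = e0 * d**(-p) for the distance d (arbitrary units)."""
    return (e0 / energy) ** (1.0 / p)

# A flash whose integrated energy is 1/8 of the reference level would be
# placed at (8)**(1/3) ~= 2 reference distances when p = 3.
print(distance_from_energy(0.125))
```

In practice, as the abstract notes, flash-by-flash variation and varying ground conductivity make a single-flash inversion like this unreliable beyond roughly 20 km, which is why the dissertation ranges storm-cell edges statistically rather than individual flashes.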
Abstract:
Interstellar clouds are not featureless, but show quite complex internal structures of filaments and clumps when observed with high enough resolution. These structures have been generated by 1) turbulent motions driven mainly by supernovae, 2) magnetic fields working on the ions and, through neutral-ion collisions, on the neutral gas as well, and 3) self-gravity pulling a dense clump together to form a new star. The study of cloud structure gives us information on the relative importance of each of these mechanisms, and helps us to gain a better understanding of the details of the star formation process. Interstellar dust is often used as a tracer for the interstellar gas, which forms the bulk of the interstellar matter. Some of the methods that are used to derive the column density are summarized in this thesis. A new method, which uses the scattered light to map the column density in large fields with high spatial resolution, is introduced. This thesis also examines grain alignment with respect to the magnetic field. The aligned grains give rise to the polarization of starlight and dust emission, thus revealing the magnetic field. The alignment mechanisms have been debated for the last half century. The strongest candidate at present is the radiative torques mechanism. In the first four papers included in this thesis, the scattered light method of column density estimation is formulated, tested in simulations, and finally used to obtain a column density map from observations. They demonstrate that the scattered light method is a very useful and reliable tool in column density estimation, and is able to provide higher resolution than the near-infrared color excess method. These two methods are complementary. The derived column density maps are also used to gain information on the dust emissivity within the observed cloud.
The two final papers present simulations of polarized thermal dust emission assuming that the alignment happens by the radiative torques mechanism. We show that the radiative torques can explain the observed decline of the polarization degree towards dense cores. Furthermore, the results indicate that the dense cores themselves might not contribute significantly to the polarized signal, and hence one needs to be careful when interpreting the observations and deriving the magnetic field.
Abstract:
This thesis consists of four research papers and an introduction providing some background. The structure in the universe is generally considered to originate from quantum fluctuations in the very early universe. The standard lore of cosmology states that the primordial perturbations are almost scale-invariant, adiabatic, and Gaussian. A snapshot of the structure from the time when the universe became transparent can be seen in the cosmic microwave background (CMB). For a long time, mainly the power spectrum of the CMB temperature fluctuations has been used to obtain observational constraints, especially on deviations from scale-invariance and pure adiabaticity. Non-Gaussian perturbations provide a novel and very promising way to test theoretical predictions. They probe beyond the power spectrum, or two-point correlator, since non-Gaussianity involves higher-order statistics. The thesis concentrates on the non-Gaussian perturbations arising in several situations involving two scalar fields, namely hybrid inflation and various forms of preheating. First we go through some basic concepts, such as cosmological inflation, reheating and preheating, and the role of scalar fields during inflation, which are necessary for understanding the research papers. We also review the standard linear cosmological perturbation theory. The second-order perturbation theory formalism for two scalar fields is developed. We explain what is meant by non-Gaussian perturbations, and discuss some difficulties in parametrisation and observation. In particular, we concentrate on the nonlinearity parameter. The prospects of observing non-Gaussianity are briefly discussed. We apply the formalism and calculate the evolution of the second-order curvature perturbation during hybrid inflation. We estimate the amount of non-Gaussianity in the model and find that there is a possibility for an observational effect. The non-Gaussianity arising in preheating is also studied.
We find that the level produced by the simplest model of instant preheating is insignificant, whereas standard preheating with parametric resonance, as well as tachyonic preheating, can easily saturate and even exceed the observational limits. We also mention other approaches to the study of primordial non-Gaussianities, which differ from the perturbation theory method chosen in the thesis work.
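For context, the nonlinearity parameter mentioned above is conventionally defined through a local expansion of the curvature perturbation around a Gaussian field. One common convention (the thesis may use a different sign or normalisation) is:

```latex
% Local parametrisation of the curvature perturbation \zeta:
% \zeta_g is a Gaussian random field, and f_NL measures the
% quadratic (non-Gaussian) correction.
\zeta(\mathbf{x}) \,=\, \zeta_g(\mathbf{x}) \,+\, \tfrac{3}{5}\, f_{\mathrm{NL}}\, \zeta_g^2(\mathbf{x})
```

Here f_NL = 0 corresponds to purely Gaussian perturbations, and the leading non-Gaussian signal appears in the three-point correlator (bispectrum), which is proportional to f_NL at lowest order.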
Abstract:
When ordinary nuclear matter is heated to a high temperature of ~10^12 K, it undergoes a deconfinement transition to a new phase, the strongly interacting quark-gluon plasma. While the color-charged fundamental constituents of the nuclei, the quarks and gluons, are at low temperatures permanently confined inside color-neutral hadrons, in the plasma the color degrees of freedom become dominant over nuclear, rather than merely nucleonic, volumes. Quantum Chromodynamics (QCD) is the accepted theory of the strong interactions, and confines quarks and gluons inside hadrons. The theory was formulated in the early seventies, but deriving first-principles predictions from it still remains a challenge, and novel methods of studying it are needed. One such method is dimensional reduction, in which the high-temperature dynamics of static observables of the full four-dimensional theory are described using a simpler three-dimensional effective theory, having only the static modes of the various fields as its degrees of freedom. A perturbatively constructed effective theory is known to provide a good description of the plasma at high temperatures, where asymptotic freedom makes the gauge coupling small. Numerical lattice simulations have, however, shown that the perturbatively constructed theory gives a surprisingly good description of the plasma all the way down to temperatures a few times the transition temperature. Near the critical temperature, however, the effective theory ceases to give a valid description of the physics, since it fails to respect the approximate center symmetry of the full theory. The symmetry plays a key role in the dynamics near the phase transition, and thus one expects that the regime of validity of the dimensionally reduced theories can be significantly extended towards the deconfinement transition by incorporating the center symmetry in them.
In the introductory part of the thesis, the status of dimensionally reduced effective theories of high-temperature QCD is reviewed, placing emphasis on the phase structure of the theories. In the first research paper included in the thesis, the non-perturbative input required in computing the g^6 term in the weak coupling expansion of the pressure of QCD is computed in the effective theory framework for an arbitrary number of colors. The last two papers, on the other hand, focus on the construction of the center-symmetric effective theories, and subsequently the first non-perturbative studies of these theories are presented. Non-perturbative lattice simulations of a center-symmetric effective theory for SU(2) Yang-Mills theory show, in sharp contrast to the perturbative setup, that the effective theory accommodates a phase transition in the correct universality class of the full theory. This transition is seen to take place at a value of the effective theory coupling constant that is consistent with the full theory coupling at the critical temperature.
Abstract:
The purpose of this research is to identify the optimal poverty policy for a welfare state. Poverty is defined by income. Policies for reducing poverty are considered primary, and those for reducing inequality secondary. Poverty is seen as a function of the income transfer system within a welfare state. This research presents a method for optimising this function for the purpose of reducing poverty. The method is applied to the representative population sample within the Income Distribution Data, using the SOMA simulation model. The iterative simulation process is continued until a level of poverty is reached at which improvements can no longer be made. Expenditures and taxes are kept in balance during the process. The result consists of two programmes. The first programme (the social assistance programme) was formulated using five social assistance parameters, all of which dealt with the norms of social assistance for adults (€/month). In the second programme (the basic benefits programme), in which social assistance was frozen at the legislative level of 2003, the parameter with the strongest poverty reduction effect turned out to be one of the basic unemployment allowances. This was followed by the norm of the national pension for a single person, two parameters related to housing allowance, and the norm for financial aid for students of higher education institutions. The most effective financing parameter in all programmes, measured by the Gini coefficient, was the capital taxation percentage. Furthermore, these programmes can also be examined in relation to their costs. The social assistance programme is significantly cheaper than the basic benefits programme, and therefore, with regard to poverty, the social assistance programme is more cost-effective than the basic benefits programme. Public demand for raising the level of basic benefits therefore does not seem to correspond to the most cost-effective poverty policy.
Raising basic benefits has the most effect on reducing poverty within the group of people whose basic benefits are raised. Raising social assistance, on the other hand, seems to have a strong influence on the poverty of all population groups. The most significant outcome of this research is the development of a method through which a welfare state's income transfer-based safety net, which has severely deteriorated in recent decades, might be mended. The only way of doing so involves either social assistance alone, or some forms of basic benefits supplemented by modifications to social assistance.
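The iterative simulation process described above can be sketched in outline. This is not the SOMA model itself, only a hypothetical greedy loop illustrating the idea: raise one benefit parameter at a time, keep the budget balanced via a financing parameter, and stop once poverty can no longer be reduced. All functions passed in are placeholders supplied by the caller.

```python
# Illustrative sketch (not SOMA): greedy parameter search that lowers a
# poverty measure while a `balance` hook keeps expenditures and taxes in
# balance. `poverty_rate`, `net_cost`, and `balance` are hypothetical
# model callbacks, not parts of any real microsimulation API.

def optimise(params, poverty_rate, net_cost, balance, step=1.0, max_iter=100):
    best = poverty_rate(params)
    for _ in range(max_iter):
        candidates = []
        for name in params:
            trial = dict(params, **{name: params[name] + step})
            trial = balance(trial, net_cost(trial))  # keep budget in balance
            candidates.append((poverty_rate(trial), trial))
        rate, trial = min(candidates, key=lambda c: c[0])
        if rate >= best:  # poverty can no longer be reduced: stop
            break
        best, params = rate, trial
    return params, best
```

The stopping rule mirrors the abstract: iteration continues until a level of poverty is reached at which improvements can no longer be made, with the budget constraint enforced at every step.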
Abstract:
The study seeks to find out whether the real burden of personal taxation has increased or decreased. In order to determine this, we investigate how the same real income has been taxed in different years. Whenever the taxes for the same real income in a given year are higher than in the base year, the real tax burden has increased; if they are lower, the real tax burden has decreased. The study thus seeks to estimate how changes in the tax regulations affect the real tax burden. It should be kept in mind that the progression in the central government income tax schedule ensures that a real change in income will bring about a change in the tax ratio. Inflation, when the tax schedules are kept nominally the same, will also increase the real tax burden. In the calculations of the study it is assumed that real income remains constant, so that we get an unbiased measure of the effects of governmental actions in real terms. The main factors influencing the amount of income taxes an individual must pay are as follows:
- Gross income (income subject to central and local government taxes).
- Deductions from gross income and taxes calculated according to tax schedules.
- The central government income tax schedule (progressive income taxation).
- The rates for the local taxes and for social security payments (proportional taxation).
In the study we investigate how much a certain group of taxpayers would have paid in taxes according to the actual tax regulations prevailing in different years if the income were kept constant in real terms. Other factors affecting tax liability are kept strictly unchanged (as constants). The resulting taxes, expressed in fixed prices, are then compared to the taxes levied in the base year (hypothetical taxation).
The question we are addressing is thus how much taxes a certain group of taxpayers with the same socioeconomic characteristics would have paid on the same real income according to the actual tax regulations prevailing in different years. This has been suggested as the main way to measure real changes in taxation, although there are several alternative measures with essentially the same aim. Next, an aggregate indicator of changes in income tax rates is constructed. It is designed to show how much the taxation of income has increased or decreased, on average, from one year to the next. The main question remains: how should aggregation over all income levels be performed? In order to determine the average real changes in the tax scales, the difference functions (differences between the actual and hypothetical taxation functions) were aggregated using taxable income as weights. Besides the difference functions, the relative changes in real taxes can be used as indicators of change. In this case the ratio between the taxes computed according to the new and the old situation indicates whether taxation has become heavier or lighter. The relative changes in tax scales can be described in a way similar to that used in describing the cost of living, or by means of price indices. For example, we can use Laspeyres' price index formula for computing the ratio between taxes determined by the new tax scales and the old tax scales. The formula answers the question: how much more or less will be paid in taxes according to the new tax scales than according to the old ones, when the real income situation corresponds to the old situation. In real terms, the central government tax burden experienced a steady decline from its high post-war level up until the mid-1950s. The real tax burden then drifted upwards until the mid-1970s. The real level of taxation in 1975 was twice that of 1961. In the 1980s there was a steady phase due to the inflation corrections of the tax schedules.
In 1989 the tax schedule was lowered drastically, and from the mid-1990s tax schedules have decreased the real tax burden significantly. Local tax rates have risen continuously from 10 percent in 1948 to nearly 19 percent in 2008. Deductions have lowered the real tax burden, especially in recent years. The aggregate figures indicate how the tax ratio for the same real income has changed over the years according to the prevailing tax regulations. We call the tax ratio calculated in this manner the real income tax ratio. A change in the real income tax ratio depicts an increase or decrease in the real tax burden. The real income tax ratio declined for some years after the war. From the beginning of the 1960s to the mid-1970s it nearly doubled. From the mid-1990s the real income tax ratio has fallen by about 35%.
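The Laspeyres-type comparison described above can be sketched as follows. The tax schedules, incomes, and weights below are made-up illustrative values, not the study's actual data; the index is the ratio of taxes under the new and old scales, both evaluated at the base-year real incomes and weighted by taxable income.

```python
# Illustrative Laspeyres-type tax index: taxes under the new scales divided
# by taxes under the old scales, evaluated at the same (base-year) real
# incomes. An index above 1 means the real tax burden has increased.

def laspeyres_tax_index(incomes, weights, tax_old, tax_new):
    new_total = sum(w * tax_new(y) for y, w in zip(incomes, weights))
    old_total = sum(w * tax_old(y) for y, w in zip(incomes, weights))
    return new_total / old_total

incomes = [20_000, 40_000, 80_000]   # constant real incomes (example values)
weights = incomes                    # taxable income used as weights
old_schedule = lambda y: 0.20 * y    # hypothetical flat 20 % schedule
new_schedule = lambda y: 0.22 * y    # hypothetical flat 22 % schedule

print(laspeyres_tax_index(incomes, weights, old_schedule, new_schedule))
```

With these flat example schedules the index is simply the ratio of the two rates (about 1.1, i.e. a 10 % heavier real burden); with progressive schedules the income weights matter, which is exactly the aggregation question the study addresses.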
Abstract:
Agriculture’s contribution to climate change is controversial, as agriculture is a significant source of greenhouse gases but also a sink of carbon. Hence its economic and technological potential to mitigate climate change has been argued to be noteworthy. However, the social profitability of emission mitigation depends on factors beyond emission reductions, such as impacts on surface water quality and profits from production. Consequently, to assess the comprehensive results of agricultural climate emission mitigation practices, these environmental and economic co-effects should be taken into account. The objective of this thesis was to develop an integrated economic and ecological model to analyse the social welfare of crop cultivation in Finland under two distinct cultivation technologies: conventional tillage and conservation tillage (no-till). Further, we ask whether it would be privately or socially profitable to allocate some of the barley cultivation to alternative land use, such as green set-aside or afforestation, when production costs, GHGs and water quality impacts are taken into account. In the theoretical framework we depict the optimal input use and land allocation choices in terms of environmental impacts and profit from production, and derive the optimal tax and payment policies for climate- and water-quality-friendly land allocation. The empirical application of the model uses Finnish data on production costs, profit structure and environmental impacts. According to our results, the given emission mitigation practices are not self-evidently beneficial for farmers or society. On the contrary, in some cases alternative land allocation could even reduce social welfare, favouring conventional crop cultivation. This is the case on mineral soils, such as clay and silt soils. On organic agricultural soils, the climate mitigation practices, in this case afforestation and green fallow, give more promising results, decreasing climate emissions and nutrient runoff to water systems.
No-till technology does not seem to benefit climate mitigation, although it does decrease other environmental impacts. Nevertheless, the data on the impacts of climate emission mitigation practices on production and climate are limited and partly contradictory. More specific experimental studies on the interaction of emission mitigation practices and the environment would be needed. Further study is important: in particular, area-specific production and environmental factors, as well as food security, food safety and socio-economic impacts, should be taken into account.
Abstract:
This thesis examines the media debate on pensions. The case analysed in the thesis is the debate that sharpened after the Finnish government made the decision to raise the retirement age. The analysed data consist of articles published in the printed media during the month after the decision was made on 24 February 2009. The aim of the study is to describe how the decision is argued about from different speaker positions and how retirement is justified from the perspective of the individual. Furthermore, the purpose is to discover different ways of discussing the pensioner. The theoretical frame for this study is social constructivism, which understands reality as socially constructed through language. From this perspective, media texts can be seen as one way of shaping reality. The data are analysed using different methods. Thematisation is used to discover the key topics, and quantification is used to examine the prevalence of different arguments. The method in which the speakers' ways of speaking are analysed in different participant categories I call "speaker position analysis". The debate around the decision to raise the retirement age highlights the power struggle both between the government and the opposition and between the government and the employee unions. One thing all discussants agree on is the need to raise the retirement age. From the individual's perspective, retirement is justified mostly by hard working conditions and inadequate health. The pensioner's image appears gloomy in most discourses. The prevailing discourses see the pensioner either as sick and tired, or as someone who is unfit for work and has lost his dignity. The debate around the decision is intertwined with the concepts of the welfare state and the individual's well-being. In postmodern society, human preferences are individualised. The welfare state means different things to different people, and each individual's subjective perception of well-being is unique.
These two aspects create the tension in the analysed media debate.
Abstract:
Evidence is reported for a narrow structure near the $J/\psi\phi$ threshold in exclusive $B^+\to J/\psi\phi K^+$ decays produced in $\bar{p} p$ collisions at $\sqrt{s}=1.96$ TeV. A signal of $14\pm5$ events, with statistical significance in excess of 3.8 standard deviations, is observed in a data sample corresponding to an integrated luminosity of 2.7 fb$^{-1}$, collected by the CDF II detector. The mass and natural width of the structure are measured to be $4143.0\pm2.9(\mathrm{stat})\pm1.2(\mathrm{syst})$ MeV/$c^2$ and $11.7^{+8.3}_{-5.0}(\mathrm{stat})\pm3.7(\mathrm{syst})$ MeV/$c^2$.
Abstract:
The 1980s and the early 1990s have proved to be an important turning point in the history of the Nordic welfare states. After this breaking point, the Nordic social order has been built upon a new foundation. This study shows that the new order is mainly built upon new hierarchies and control mechanisms that have been developed consistently through economic and labour market policy measures. During the post-war period, the Nordic welfare states to an increasing extent created equality of opportunity and scope for agency among people. Public social services were available to all, and the tax-benefit system maintained a level income distribution. During this golden era of the Nordic welfare state, the scope for agency was, however, limited by social structures. Public institutions and law tended to categorize people according to their life circumstances, ascribing them a predefined role. In the 1980s and 1990s this collectivist social order began to mature, and it became subject to political renegotiation. Signs of a new social order in the Nordic countries have included the liberalization of the financial markets, the privatization of public functions and the redefinition of the role of the public sector. It is now possible to reassess the ideological foundations of this new order. In contrast to widely used political rhetoric, the foundation of the new order has not been the ideas of individual freedom or choice. Instead, the most important aim appears to have been to control and direct people to act in accordance with the rules of the market. The various levels of government and the social security system have been redirected to serve this goal. Instead of being a mechanism for redistributing income, the Nordic social security system has been geared towards creating new hierarchies on the Nordic labour markets. During the past decades, the conditions for receiving income support and unemployment benefit have been tightened in all Nordic countries.
As a consequence, people have been forced to accept deteriorating terms and conditions on the labour market. Country-specific variations exist, however: in sum, Sweden has been the most conservative, Denmark the most innovative and Finland the most radical in reforming labour market policy. The new hierarchies on the labour market have coincided with slow or non-existent growth of real wages and with strong growth of the share of capital income. Slow growth of real wages has kept inflation low and thus secured the value of capital. Societal development has thus progressed from equality of opportunity during the age of the welfare states towards a hierarchical social order where the majority of people face increasing constraints and where a fortunate minority enjoys prosperity and security.
Abstract:
The aim of this study was to examine the connection between on-farm assessed welfare and the production results of sows. Welfare was assessed using the Finnish welfare index, the A-index. Two different production data sets, both based on the national production monitoring data, were used as production results. The welfare assessments were carried out in 30 piglet-producing piggeries during March 2007. The A-index consists of six categories: 'locomotion opportunities', 'floor properties', 'social contacts', 'light, air and noise', 'feeding and access to water', and 'animal health and standard of care'. Each category contains 3-10 mainly environment-based variables, which vary between units. The maximum score for a unit is 100. The welfare measurements were made in the farrowing, mating and dry sow units. Because of the small number of separate mating units (n=7), the farm-specific mating and dry sow unit scores were combined, and the means were used in the analyses. The connections to production were studied using two different data sets: 1) the Farm Report data (n=29), consisting of unmodified farm and production results from the year preceding the farm visit; and 2) the POTSI data (n=30), consisting of production data processed with the POTSI program (MTT), which includes the effect of the management group (farm, year, season) on the litter-specific production of gilts and sows. The connections were analysed using correlation and regression analyses. Although participation in the study was voluntary, on the basis of both production data sets the study farms represent an average-producing Finnish pig farm. The total A-index scores varied between 37.5-64.0 in the farrowing unit and 39.5-83.5 in the dry sow unit. When the Farm Report data were used, better scores in the 'animal health and standard of care' category of the farrowing unit shortened the animals' reproductive cycle, increased the numbers of litters and piglets born, and lowered the number of stillborn piglets.
According to the regression model, the 'animal health and standard of care' category explained the variation in the number of piglets born, the length of the farrowing interval and the mean parity. Better scores in the 'locomotion opportunities' category of the dry sow unit lowered the number of litters born and the numbers of piglets born and weaned. According to the regression model, the proportion of gilt litters and the scores of the 'locomotion opportunities' category explained the variation in the number of weaned piglets. With the POTSI data, a lower number of stillborn piglets was connected, for gilts, with better 'social contacts' scores in the farrowing unit and, for sows, with better 'animal health and standard of care' scores in the dry sow unit. The results obtained with the two different production data sets differed from each other. In future studies it is therefore preferable to use Farm Report data, in which the production results are reported on a yearly basis. On the basis of this study, welfare and production are connected, and these connections also have a considerable economic impact. In particular, good animal care and animal health increase the number of piglets produced and shorten the reproductive cycle. Special attention should be paid to the social stress of loose-housed dry sows and to ensuring that all individuals have access to feed.
Abstract:
In this paper I provide some empirical answers to important questions such as the determinants of price inflation and the role of inflation policies. The results indicate that monetary policy is surprisingly impotent as a device for controlling inflation, and there is little support for the claim that it influences the real variables. The low inflation after the Finnish devaluations in the beginning of the 1990s is foremost due to a previous imbalance in the labor markets and depressed aggregate demand.