48 results for consumer price indices

in Helda - Digital Repository of the University of Helsinki


Relevance:

80.00%

Publisher:

Abstract:

The study seeks to find out whether the real burden of personal taxation has increased or decreased. In order to determine this, we investigate how the same real income has been taxed in different years. Whenever the taxes on the same real income are higher in a given year than in the base year, the real tax burden has increased; if they are lower, it has decreased. The study thus seeks to estimate how changes in tax regulations affect the real tax burden. It should be kept in mind that the progression in the central government income tax schedule ensures that a real change in income will bring about a change in the tax ratio. Inflation will also increase the real tax burden if the tax schedules are kept nominally unchanged. In the calculations of the study it is assumed that real income remains constant, so that we obtain an unbiased measure of the effects of governmental actions in real terms. The main factors influencing the amount of income taxes an individual must pay are as follows: gross income (income subject to central and local government taxes); deductions from gross income and from taxes calculated according to tax schedules; the central government income tax schedule (progressive income taxation); and the rates for local taxes and for social security payments (proportional taxation). In the study we investigate how much a certain group of taxpayers would have paid in taxes according to the actual tax regulations prevailing in different years if their income were kept constant in real terms. Other factors affecting tax liability are kept strictly unchanged (as constants). The resulting taxes, expressed in fixed prices, are then compared to the taxes levied in the base year (hypothetical taxation). The question we are addressing is thus how much taxes a certain group of taxpayers with the same socioeconomic characteristics would have paid on the same real income according to the actual tax regulations prevailing in different years. This has been suggested as the main way to measure real changes in taxation, although there are several alternative measures with essentially the same aim. Next, an aggregate indicator of changes in income tax rates is constructed. It is designed to show how much the taxation of income has increased or decreased from one year to the next on average. The main question remains: how should aggregation over all income levels be performed? In order to determine the average real changes in the tax scales, the difference functions (the differences between the actual and hypothetical taxation functions) were aggregated using taxable income as weights. Besides the difference functions, the relative changes in real taxes can be used as indicators of change. In this case the ratio between the taxes computed according to the new and the old situation indicates whether taxation has become heavier or lighter. The relative changes in tax scales can be described in a way similar to that used in describing the cost of living, i.e. by means of price indices. For example, we can use Laspeyres' price index formula for computing the ratio between the taxes determined by the new tax scales and those determined by the old tax scales. The formula answers the question: how much more or less will be paid in taxes according to the new tax scales than according to the old ones when the real income situation corresponds to the old situation? In real terms the central government tax burden experienced a steady decline from its high post-war level up until the mid-1950s.
The real tax burden then drifted upwards until the mid-1970s: the real level of taxation in 1975 was twice that of 1961. In the 1980s there was a stable phase owing to the inflation corrections of the tax schedules. In 1989 the tax schedule was cut drastically, and from the mid-1990s onwards changes in the tax schedules have reduced the real tax burden significantly. Local tax rates have risen continuously, from 10 percent in 1948 to nearly 19 percent in 2008. Deductions have lowered the real tax burden, especially in recent years. Aggregate figures indicate how the tax ratio for the same real income has changed over the years according to the prevailing tax regulations. We call the tax ratio calculated in this manner the real income tax ratio; a change in it depicts an increase or decrease in the real tax burden. The real income tax ratio declined for some years after the war, nearly doubled between the beginning of the 1960s and the mid-1970s, and has fallen by about 35% since the mid-1990s.
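As a hedged illustration of the Laspeyres-type comparison described in the abstract above, the index can be written as follows. The notation is mine, not the author's: T_old and T_new denote the tax functions implied by the old and new tax scales, and y_i the constant real taxable incomes used as weights.

```latex
% Laspeyres-type index of the change in tax scales: average tax rates
% T(y)/y are weighted by taxable incomes y_i held fixed in real terms,
% which reduces to a ratio of total taxes under the two scales.
\[
  I_L \;=\; \frac{\sum_i y_i \,\bigl(T_{\text{new}}(y_i)/y_i\bigr)}
                 {\sum_i y_i \,\bigl(T_{\text{old}}(y_i)/y_i\bigr)}
      \;=\; \frac{\sum_i T_{\text{new}}(y_i)}{\sum_i T_{\text{old}}(y_i)}
\]
```

A value of this index above one indicates that the new scales tax the old real income situation more heavily; a value below one indicates lighter taxation.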

Relevance:

80.00%

Publisher:

Abstract:

In the thesis we consider inference for cointegration in vector autoregressive (VAR) models. The thesis consists of an introduction and four papers. The first paper proposes a new test for cointegration in VAR models that is based directly on the eigenvalues of the least squares (LS) estimate of the autoregressive matrix. In the second paper we compare a small-sample correction for the likelihood ratio (LR) test of cointegrating rank with the bootstrap. The simulation experiments show that the bootstrap works very well in practice and dominates the correction factor. The tests are applied to international stock price data, and the finite-sample performance of the tests is investigated by simulating the data. The third paper studies the demand for money in Sweden 1970–2000 using the I(2) model. In the fourth paper we re-examine the evidence of cointegration between international stock prices. The paper shows that some of the previous empirical results can be explained by the small-sample bias and size distortion of Johansen's LR tests for cointegration. In all papers we work with two data sets. The first is a Swedish money demand data set with observations on the money stock, the consumer price index, gross domestic product (GDP), the short-term interest rate and the long-term interest rate. The data are quarterly and the sample period is 1970(1)–2000(1). The second data set consists of month-end stock market index observations for Finland, France, Germany, Sweden, the United Kingdom and the United States from 1980(1) to 1997(2). Both data sets are typical of the sample sizes encountered in economic data, and the applications illustrate the usefulness of the models and tests discussed in the thesis.
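As a minimal sketch of the kind of LR test for cointegrating rank discussed here, the snippet below runs Johansen's trace test on simulated data using the statsmodels implementation. The bivariate data-generating process and all settings are illustrative assumptions, not the thesis' Swedish money demand or stock index data.

```python
# Johansen trace test for cointegrating rank on simulated data.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(42)
T = 200

# One common stochastic trend drives both series -> cointegrating rank 1.
trend = np.cumsum(rng.normal(size=T))
y1 = trend + rng.normal(scale=0.5, size=T)
y2 = 0.8 * trend + rng.normal(scale=0.5, size=T)
data = np.column_stack([y1, y2])

# det_order=0: constant term included; k_ar_diff=1: one lagged difference.
res = coint_johansen(data, det_order=0, k_ar_diff=1)

for r, (stat, cvs) in enumerate(zip(res.lr1, res.cvt)):
    # res.cvt holds the 90%, 95% and 99% asymptotic critical values.
    print(f"H0: rank <= {r}: trace statistic {stat:.2f}, 95% cv {cvs[1]:.2f}")
```

In the bootstrap approach compared in the second paper, the same statistic would be recomputed on resampled data to obtain small-sample critical values in place of the asymptotic ones printed here.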

Relevance:

80.00%

Publisher:

Abstract:

This study is divided into two parts: a methodological part and a part which focuses on the saving of households. In the 1950s both the concepts and the household surveys themselves went through a rapid change. The development of national accounts was motivated by Keynesian theory, and the 1940s and 1950s were an important time for this development. Before this, saving was understood as cash money or money deposited in bank accounts, but the changes of this era led to the establishment of the modern saving concept. Household surveys were established separately from the development of national accounts, and they have been conducted in Finland since the beginning of the 20th century. At that time surveys were conducted in order to observe the living standard of the working class, and as a result they were based on the tradition of welfare studies. Another motivation for undertaking the studies was to estimate weights for the consumer price index. A final reason underpinning the government's interest in this data was to find out whether the working class had any grounds to become radicalised and adopt revolutionary ideas. As the need for economic analysis increased and the data requirements underlying the political decision-making process expanded, the two traditions, and thus the two data sources, started to integrate. In the 1950s the household surveys were still compiled distinctly from the national accounts and were virtually unaffected by economic theory. The 1966 survey was the first study that was clearly motivated by national accounts and saving analysis; it also covered the whole population rather than being limited to just a part of it. It is essential to note that the integration of these two traditions is still continuing. It recently took a big step forward when the Stiglitz, Sen and Fitoussi Committee report was introduced and the criticism of the current measure of welfare was thus taken seriously. The Stiglitz report emphasises that the measurement of welfare should focus on households and that both the macro and the micro perspective should be included in the analysis. In this study the national accounts are applied to the household survey data from the years 1950-51, 1955-56 and 1959-60. The first two surveys cover the working population of towns and market towns, and the last survey covers the population of rural areas. The analysis is performed at three levels: the macroeconomic level, the meso level (i.e. the level of different types of households), and the micro level (i.e. the level of individual households). As a result, it analyses how different households saved and consumed and how this changed during the 1950s.

Relevance:

80.00%

Publisher:

Abstract:

In recent years, thanks to developments in information technology, large-dimensional datasets have become increasingly available. Researchers now have access to thousands of economic series, and the information contained in them can be used to create accurate forecasts and to test economic theories. To exploit this large amount of information, researchers and policymakers need an appropriate econometric model. Usual time series models, vector autoregressions for example, cannot incorporate more than a few variables. There are two ways to solve this problem: use variable selection procedures, or summarize the information contained in the series in an index model. This thesis focuses on one of the most widespread index models, the dynamic factor model (the theory behind this model, based on previous literature, is the core of the first part of this study), and its use in forecasting Finnish macroeconomic indicators (the focus of the second part of the thesis). In particular, I forecast economic activity indicators (e.g. GDP) and price indicators (e.g. the consumer price index) from three large Finnish datasets. The first dataset contains a large set of aggregated series obtained from the Statistics Finland database. The second dataset is composed of economic indicators from the Bank of Finland. The last dataset is formed of disaggregated data from Statistics Finland, which I call the micro dataset. The forecasts are computed following a two-step procedure: in the first step I estimate a set of common factors from the original dataset; the second step consists of formulating forecasting equations that include the factors extracted previously. The predictions are evaluated using the relative mean squared forecast error, where the benchmark model is a univariate autoregressive model. The results are dataset-dependent. The forecasts based on factor models are very accurate for the first dataset (the Statistics Finland one), while they are considerably worse for the Bank of Finland dataset. The forecasts derived from the micro dataset are still good, but less accurate than the ones obtained in the first case. This work opens up multiple lines of further research. The results obtained here can be replicated for longer datasets. The non-aggregated data can be represented in an even more disaggregated form (firm level). Finally, the use of the micro data, one of the major contributions of this thesis, can be useful in the imputation of missing values and in the creation of flash estimates of macroeconomic indicators (nowcasting).
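The two-step procedure described above can be sketched as follows, under purely illustrative assumptions (a simulated panel, two factors, a one-step horizon): common factors are first estimated by principal components from the standardized panel, and the target is then regressed on them in a "diffusion index" equation, with a univariate AR(1) as the benchmark.

```python
# Two-step factor-model forecast vs. an AR benchmark (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
T, N, h, r = 200, 50, 1, 2          # sample size, panel width, horizon, factors

# Simulate a factor structure: X = F L' + noise, target driven by F.
F = rng.normal(size=(T, r))
L = rng.normal(size=(N, r))
X = F @ L.T + rng.normal(scale=2.0, size=(T, N))
y = F @ np.array([1.0, -0.5]) + rng.normal(scale=0.5, size=T)

# Step 1: principal-component factors from the standardized panel.
Z = (X - X.mean(0)) / X.std(0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
F_hat = Z @ Vt[:r].T                # estimated factors (up to rotation)

# Step 2: direct h-step forecasting regression y_{t+h} = a + b'F_t + e.
A = np.column_stack([np.ones(T - h), F_hat[:-h]])
beta, *_ = np.linalg.lstsq(A, y[h:], rcond=None)
y_fc = np.concatenate([np.ones(1), F_hat[-1]]) @ beta

# Benchmark: univariate AR(1) fitted by least squares.
B = np.column_stack([np.ones(T - 1), y[:-1]])
phi, *_ = np.linalg.lstsq(B, y[1:], rcond=None)
y_ar = phi[0] + phi[1] * y[-1]

print(f"factor forecast {y_fc:.3f}, AR(1) benchmark {y_ar:.3f}")
```

In a proper evaluation the two steps would be repeated recursively over an expanding window and the relative mean squared forecast error accumulated over the out-of-sample period.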

Relevance:

30.00%

Publisher:

Abstract:

The safety of food has become an increasingly interesting issue to consumers and the media. It has also become a source of concern, as the amount of information on the risks related to food safety continues to expand. Today, risk and safety are permanent elements of the concept of food quality. Safety, in particular, is an attribute that consumers find very difficult to assess. The literature in this study consists of three main themes: traceability; consumer behaviour related to quality and safety issues and the perception of risk; and valuation methods. The empirical scope of the study was restricted to beef, because the beef labelling system enables reliable tracing of the origin of beef, as well as of attributes related to safety, environmental friendliness and animal welfare. The purpose of this study was to examine what kind of information flows are required to ensure quality and safety in the food chain for beef, and who should produce that information. Studying consumers' willingness to pay makes it possible to determine whether consumers consider the quantity of information available on the safety and quality of beef sufficient. One of the main findings of this study was that the majority of Finnish consumers (73%) regard increased quality information as beneficial. These benefits were assessed using the contingent valuation method. The results showed that those who were willing to pay for increased information on the quality and safety of beef would accept an average price increase of 24% per kilogram. The results showed that certain risk factors affect consumer willingness to pay: if the respondents considered genetic modification of food or foodborne zoonotic diseases to be harmful or extremely harmful risk factors, they were more likely to be willing to pay for quality information. The results produced by the models thus confirmed the premise that certain food-related risks affect willingness to pay for beef quality information. The results also showed that safety-related quality cues are significant to consumers. Above all, consumers would like to receive information on the control of zoonotic diseases that are contagious to humans. Similarly, other information related to process control ranked high among the top responses. Information on any potential genetic modification was also considered important, even though genetic modification was not regarded as a high risk factor.
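As a hedged sketch of how a dichotomous-choice contingent valuation model of this kind is often estimated, the snippet below fits a probit of a yes/no response to a posited price premium on the bid level and on risk-perception indicators. The variable names, simulated data and functional form are my assumptions; the study's actual specification may differ.

```python
# Dichotomous-choice contingent valuation: probit of "yes" on bid and
# risk perceptions, with mean WTP recovered from the coefficients.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
bid = rng.uniform(0.05, 0.50, n)    # offered premium, share of price per kg
gm_risk = rng.integers(0, 2, n)     # perceives GM food as harmful
zoo_risk = rng.integers(0, 2, n)    # perceives zoonotic diseases as harmful

# "Yes" is less likely at high bids, more likely for risk-averse respondents.
latent = 1.0 - 4.0 * bid + 0.6 * gm_risk + 0.8 * zoo_risk + rng.normal(size=n)
yes = (latent > 0).astype(int)

X = sm.add_constant(np.column_stack([bid, gm_risk, zoo_risk]))
fit = sm.Probit(yes, X).fit(disp=0)
a, b1, b2, b3 = fit.params

# In this linear-in-bid probit, mean WTP at given covariates is
# -(intercept + covariate terms) / bid coefficient.
print("mean WTP (risk-averse respondent):", -(a + b2 + b3) / b1)
```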

Relevance:

30.00%

Publisher:

Abstract:

Perhaps the most fundamental prediction of financial theory is that the expected returns on financial assets are determined by the amount of risk contained in their payoffs. Assets with a riskier payoff pattern should provide higher expected returns than assets that are otherwise similar but provide payoffs containing less risk. Financial theory also predicts that not all types of risk should be compensated with higher expected returns. It is well known that asset-specific risk can be diversified away, whereas the systematic component of risk that affects all assets remains even in large portfolios. Thus, the asset-specific risk that the investor can easily get rid of by diversification should not lead to higher expected returns, and only the shared movement of individual asset returns – the sensitivity of these assets to a set of systematic risk factors – should matter for asset pricing. It is within this framework that this thesis is situated. The first essay proposes a new systematic risk factor, hypothesized to be correlated with changes in investor risk aversion, which manages to explain a large fraction of the return variation in the cross-section of stock returns. The second and third essays investigate the pricing of asset-specific risk, uncorrelated with commonly used risk factors, in the cross-section of stock returns. These three essays use stock market data from the U.S. The fourth essay presents a new total return stock market index for the Finnish stock market, beginning from the opening of the Helsinki Stock Exchange in 1912 and ending in 1969, when other total return indices became available. Because a total return stock market index for the period prior to 1970 has not been available before, academics and stock market participants have not known the historical return that stock market investors in Finland could have achieved on their investments. The new stock market index presented in essay 4 makes it possible, for the first time, to calculate the historical average return on the Finnish stock market and to conduct further studies that require long time series of data.
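The diversification argument in the passage above can be made concrete with a short calculation under an assumed one-factor return structure (toy numbers, not the essays' data): the idiosyncratic component of an equal-weighted portfolio's variance shrinks at rate 1/N, while the systematic component remains as a floor.

```python
# Diversification in a one-factor model: the variance of an equal-weighted
# portfolio converges to its systematic component as N grows.
# Toy parameters chosen purely for illustration.
beta, var_f, var_eps = 1.0, 0.04, 0.09  # loading, factor and idiosyncratic variances

for n in [1, 10, 100, 1000]:
    # Equal weights: var = beta^2 * var_f  +  var_eps / n
    port_var = beta**2 * var_f + var_eps / n
    print(f"N={n:5d}: portfolio variance {port_var:.4f} "
          f"(systematic floor {beta**2 * var_f:.4f})")
```

Only the systematic floor survives in a large portfolio, which is why asset-specific risk should not command a premium in this framework.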

Relevance:

20.00%

Publisher:

Abstract:

The focus of this study was to examine the constructions of the educable subject of the lifelong learning (LLL) narrative in the narrative life histories of adult students at general upper secondary school for adults (GUSSA). In this study lifelong learning has been defined as a cultural narrative on education, "a system of political thinking" that is not internally consistent but has contradictory themes embedded within it (Billig et al., 1988). As earlier research has shown and this study also confirms, the LLL narrative creates differences between those who are included and those who fall behind and are excluded from the learning society ideal. Educability expresses socially constructed interpretations of who benefits from education and of who should be educated and how. The presupposition in this study has been that contradictions between the LLL narrative and the so-called traditional constructions of educability are likely to emerge, as the former relies on an all-inclusive interpretation of educability and the latter on the meritocratic model of educating individuals according to their innate abilities. The school system continues to uphold the institutionalized ethos of educability that ranks students into the categories "bright", "mediocre" and "poor" (Räty & Snellman, 1998) on the basis of their abilities, including gender-related differences as well as differences based on social class. Traditional age-related norms also persist: for example, general upper secondary education is normatively completed in youth, not in adulthood, and the formal learning context continues to outweigh both non-formal and informal learning. Moreover, in this study the construction of social differences in relation to educability, and thereby unequal access to education, has been examined in relation to age, social class and gender. The biographical work of the research participants forms a peephole that permits the examination of the dilemmatic nature of the constructions of educability. Formal general upper secondary education in adulthood is situated on the border between the traditional and the LLL narratives on educability: participation in GUSSA inevitably means that one's ability and competence as a student and learner is reassessed through the assessment criteria maintained by schools, whereas according to the principles of LLL everyone is educable and everyone is encouraged to learn throughout their lives regardless of age, social class or gender. This study is situated in the fields of adult education, the sociology of education, and social psychological research on educability, and it has also been informed by feminist studies. Moreover, this study contributes to narrative life history research by combining the structural analysis of narratives (Labov & Waletzky, 1997), i.e. mini-stories within a life history, with the analysis of life histories as structural and thematic wholes and of the creation of coherence in them, thus permitting both micro and macro analyses. In accounting for the discontinuity created by participating in general upper secondary school study in adulthood rather than normatively in youth, the GUSSA students construct coherence in relation to their ability and competence as students and learners. The seven case studies illuminate the social differences constructed in relation to educability, i.e. social class, gender, age, and the "new category of student and learner".
In the data of this study, i.e. the narrative life histories of 20 general upper secondary school adult graduates, generated primarily through interviews, two main coherence patterns of the adult educable subject emerge. The first, performance-oriented pattern displays qualities that are closely related to the principles of LLL. Contrary to the principles of lifewide learning, however, the documentation of one's competence through formal qualifications outweighs non-formal and informal learning in preparing for future change and for the competition for further education, professional careers and higher social positions. The second, flexible learning pattern calls into question the status of formal, especially theoretical and academically oriented, education; inner development is seen as more important than such external signs of development as grades and certificates. Studying and learning are constructed as a hobby and as a means to a more satisfactory life, as opposed to a socially and culturally valued serious occupation leading to further education and career development. Consequently, as a curious, active and independent learner, this educable but not readily employable subject is pushed to the periphery of lifelong learning. These two coherence patterns of the adult educable subject illuminate who is to be educated and how. The educable and readily employable LLL subject is to participate in formal education in order to achieve qualifications for working life, whereas the educable but not employable subject may utilize lifewide learning for her/his own pleasure. Key words: adult education, general upper secondary school for adults, educability, lifelong learning, narrative life history

Relevance:

20.00%

Publisher:

Abstract:

This study examines the narrative construction of consumerism in Finnish consumer culture in the early 21st century. The objects of the study are consumer life stories and essays on environmentally friendly consumption written by 15- to 19-year-old high school students; group discussions were used as additional research material. The data was gathered at five high schools in different areas of Finland. Young people's consumer narratives are interpreted through cultural stories and consumer ethoses such as self-control, gratification and green consumerism. The narrative research approach is used to analyse what types of consumer positions these young people construct in stories about their own consumer history, and what kinds of ideas and thought patterns they construct around green consumerism. The study creates a multifaceted image of young people as agents in consumer society. They construct archetypical stories of wastrels and scrooges, as well as of prudent and environmentally friendly consumers. Consumption and expenditure are, however, mostly a continuous battle between self-control and giving in to gratification. This reality is illustrated, among other things, by clever expressions invented by the young people themselves, such as Carefree Pennywise, Prudent Hedonist and Wasteful Scrooge. In their narratives, young people also analyse the usefulness, or uselessness, of their consumption decisions, and develop themselves into controlled and sensible consumers. This kind of virtuous consumer allows him/herself the joy and gratification of consumption, as long as these are "kept in check". One's view of expenditure and consumption is not permanent: consumerism may alter with time. A wastrel may grow up to be a young person in control of their desires, or a thrifty child may awaken to the pleasures of consumption in their teens. Consumerism may also be polyphonic: it may simultaneously, and even without conflict, be constructed upon the discourses of wastefulness, prudence, gratification and green consumerism. Young people allow gratification to form a part of green consumerism too: it is not simply restrictive self-denial. They also see many hurdles in the way of green consumerism, such as the elevated price of ecological products and the difficulties of green consumer practices. The stories also show a gender division in green consumerism: for young men, ecological considerations only very rarely offer elements for the construction of consumerism, whereas striving for day-to-day green practices is typical for young women.

Relevance:

20.00%

Publisher:

Abstract:

The objective of this thesis is to find out how dominant firms in a liberalised electricity market will react when they face an increase in costs due to emissions trading, and how this will affect the price of electricity. The Nordic electricity market is chosen as the setting in which to examine the question, since recent studies on the subject suggest that the interaction between electricity markets and emissions trading depends strongly on conditions specific to each market area. There is reason to believe that imperfect competition prevails in the Nordic market, so the issue is approached through the theory of oligopolistic competition. The generation capacity available on the market, the marginal cost of electricity production and the seasonal levels of demand form the data based on which the dominant firms are modelled using the Cournot model of competition. The calculations are made for two levels of demand, high and low, and with several values of demand elasticity. The producers are first modelled with no carbon costs, and then the cost of carbon dioxide at 20 €/t is added to those technologies subject to carbon regulation. In each case the situation under perfect competition is determined as a comparison point for the results of the Cournot game. The results imply that the potential for market power exists on the Nordic market, but the possibility of exercising it depends on the demand level. In seasons of high demand the dominant firms may raise the price significantly above competitive levels, and the situation is aggravated when the cost of carbon dioxide is accounted for. Under low demand levels there is no difference between perfect and imperfect competition. The results are highly dependent on the price elasticity of demand.
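A minimal sketch of the Cournot comparison described above, assuming linear inverse demand and constant marginal costs; the demand parameters, marginal costs and emission factors below are illustrative stand-ins, not the thesis' calibration of the Nordic market.

```python
# Cournot equilibrium under linear inverse demand P = a - b*Q with constant
# marginal costs, with and without an assumed CO2 cost of 20 EUR/t.
import numpy as np

def cournot(a, b, costs):
    """Closed-form N-firm Cournot-Nash equilibrium under linear demand."""
    c = np.asarray(costs, dtype=float)
    n = len(c)
    # First-order conditions give q_i = (a + sum(c) - (n+1)*c_i) / (b*(n+1)).
    q = (a + c.sum() - (n + 1) * c) / (b * (n + 1))
    assert (q > 0).all(), "interior equilibrium assumed"
    return a - b * q.sum(), q

a, b = 100.0, 0.1                  # assumed demand intercept and slope
mc = np.array([10.0, 25.0, 30.0])  # marginal costs, EUR/MWh (hydro/nuclear, two fossil)
emis = np.array([0.0, 0.4, 0.8])   # assumed emission factors, t CO2 per MWh

for label, costs in [("no carbon cost", mc), ("CO2 at 20 EUR/t", mc + 20.0 * emis)]:
    price, q = cournot(a, b, costs)
    print(f"{label}: price {price:.2f} EUR/MWh, quantities {np.round(q, 1)}")
```

With this linear demand the equilibrium price equals (a + sum of marginal costs)/(n+1), so adding the carbon cost to the fossil units' marginal costs raises the Cournot price directly, mirroring the abstract's finding that carbon costs aggravate the price effect of market power.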

Relevance:

20.00%

Publisher:

Abstract:

This thesis addresses the modeling of financial time series, especially stock market returns and daily price ranges. Modeling data of this kind can be approached with so-called multiplicative error models (MEM). These models nest several well-known time series models such as the GARCH, ACD and CARR models, and they are able to capture many well-established features of financial time series, including volatility clustering and leptokurtosis. In contrast to these phenomena, different kinds of asymmetries have received relatively little attention in the existing literature. In this thesis asymmetries arise from various sources. They are observed in both conditional and unconditional distributions, for variables with non-negative values and for variables that take values on the real line. In the multivariate context asymmetries can be observed in the marginal distributions as well as in the relationships between the variables modeled. New methods for all these cases are proposed. Chapter 2 considers GARCH models and the modeling of the returns of two stock market indices. The chapter introduces the so-called generalized hyperbolic (GH) GARCH model to account for asymmetries in both the conditional and the unconditional distribution. In particular, two special cases of the GARCH-GH model which describe the data most accurately are proposed. They are found to improve the fit of the model when compared to symmetric GARCH models. The advantages of accounting for asymmetries are also observed through Value-at-Risk applications. Both theoretical and empirical contributions are provided in Chapter 3 of the thesis. In this chapter the so-called mixture conditional autoregressive range (MCARR) model is introduced, examined and applied to the daily price ranges of the Hang Seng Index. The conditions for the strict and weak stationarity of the model, as well as an expression for the autocorrelation function, are obtained by writing the MCARR model as a first-order autoregressive process with random coefficients. The chapter also introduces the inverse gamma (IG) distribution to CARR models. The advantages of the CARR-IG and MCARR-IG specifications over conventional CARR models are found in the empirical application both in- and out-of-sample. Chapter 4 discusses the simultaneous modeling of absolute returns and daily price ranges. In this part of the thesis a vector multiplicative error model (VMEM) with an asymmetric Gumbel copula is found to provide substantial benefits over the existing VMEM models based on elliptical copulas. The proposed specification is able to capture the highly asymmetric dependence of the modeled variables, thereby improving the performance of the model considerably. The economic significance of the results is established by examining the information content of the derived volatility forecasts.
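As a hedged illustration of the conditional autoregressive range family discussed in Chapter 3, the sketch below simulates a basic CARR(1,1) process with unit-exponential errors and recovers its parameters by exponential quasi-maximum likelihood. The parameter values and estimation details are my assumptions and deliberately simpler than the MCARR and CARR-IG specifications introduced in the thesis.

```python
# CARR(1,1): x_t = lambda_t * eps_t with eps_t ~ Exp(1), and
# lambda_t = omega + alpha * x_{t-1} + beta * lambda_{t-1}.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T = 2000
omega, alpha, beta = 0.1, 0.2, 0.7      # assumed true parameters

x = np.empty(T)
lam = np.empty(T)
lam[0] = omega / (1 - alpha - beta)      # unconditional mean as start value
x[0] = lam[0] * rng.exponential()
for t in range(1, T):
    lam[t] = omega + alpha * x[t - 1] + beta * lam[t - 1]
    x[t] = lam[t] * rng.exponential()

def neg_qml(params, x):
    w, a_, b_ = params
    if w <= 0 or a_ < 0 or b_ < 0 or a_ + b_ >= 1:
        return np.inf                    # enforce positivity and stationarity
    lam = np.empty_like(x)
    lam[0] = x.mean()
    for t in range(1, len(x)):
        lam[t] = w + a_ * x[t - 1] + b_ * lam[t - 1]
    # Exponential quasi-likelihood, the standard QMLE for MEM-type models.
    return np.sum(np.log(lam) + x / lam)

fit = minimize(neg_qml, x0=(0.05, 0.1, 0.8), args=(x,), method="Nelder-Mead")
print("estimated (omega, alpha, beta):", np.round(fit.x, 3))
```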

Relevance:

20.00%

Publisher:

Abstract:

Breast reconstruction is performed for 10-15% of women operated on for breast cancer. A popular method is the TRAM (transverse rectus abdominis musculocutaneous) flap, formed of the patient's own abdominal tissue: a part of one of the rectus abdominis muscles and a transverse skin-subcutis area over it. The flap can be raised as a pedicled or a free flap. The pedicled TRAM flap, based on its nondominant pedicle, the superior epigastric artery (SEA), is rotated to the chest so that blood flow through the SEA continues. The free TRAM flap, based on its dominant pedicle, the deep inferior epigastric artery (DIEA), is detached from the abdomen and transferred to the chest, and the DIEA and vein are anastomosed to vessels on the chest. Cutaneous necrosis is seen in 5–60% of pedicled TRAM flaps and in 0–15% of free TRAM flaps. This study was the first to show with blood flow measurements that cutaneous blood flow is more generous in free than in pedicled TRAM flaps. Since this study, the free TRAM flap has exceeded the pedicled flap in popularity as a breast reconstruction method, although the free flap is technically a more demanding procedure. In pedicled flaps, a decrease in cutaneous blood flow was observed when the DIEA was ligated; it seems that the SEA cannot provide sufficient blood flow during the first postoperative days. The postoperative cutaneous blood flow in free TRAM flaps was more stable than in pedicled flaps. The development of cutaneous necrosis in pedicled TRAM flaps could be predicted on the basis of intraoperative laser Doppler flowmetry (LDF) measurements: in flaps developing cutaneous necrosis during the following week, the LDF value on the contralateral skin of the flap decreased to 43 ± 7% of the initial value after ligation of the DIEA. Endothelin-1 (ET-1) is a powerful vasoconstrictory peptide secreted by vascular endothelial cells. A correlation was found between plasma ET-1 concentrations and the peripheral vasoconstriction developing during and after breast reconstruction with a pedicled TRAM flap. ET-1 was not associated with the development of cutaneous necrosis. Felodipine, a vasodilating calcium channel antagonist, had no effect on plasma ET-1 concentrations, peripheral vasoconstriction or the development of cutaneous necrosis in free TRAM flaps. Body mass index and the thickness of abdominal tissue were not associated with cutaneous necrosis in pedicled TRAM flaps.