43 results for Breakdown of consumption
in the Repositório digital da Fundação Getúlio Vargas - FGV
Abstract:
This thesis is an analysis of consumption in Brazil based on data from the 2008-2009 Consumer Expenditure Survey collected by the Brazilian Institute of Geography and Statistics. Its main aim was to identify differences and similarities in consumption among Brazilian households and to estimate the importance of demographic and geographic characteristics. Initially, households belonging to different social classes and geographical regions were compared on the basis of their consumption. For further insight, two cluster analyses were conducted. First, households were grouped according to the absolute values of their expenditures. Five clusters emerged; cluster membership showed larger spending in all expense categories for higher-income households and a substantial association with particular demographic variables, including region, neighborhood, race and education. Second, a cluster analysis was performed on each household's proportionate distribution of total spending, which also revealed five groups: Basic Consumers, the largest group, which spends only on fundamental goods; Limited Spenders, who additionally purchase alcohol, tobacco, literature and telecommunication technologies; Mainstream Buyers, characterized by spending on clothing, personal care, entertainment and transport; Advanced Consumers, who have high relative expenses on financial and legal services, healthcare and education; and Exclusive Spenders, households distinguished by spending on vehicles, real estate and travel.
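A minimal sketch of the kind of clustering step described in this abstract, assuming k-means on budget shares; the expenditure matrix and category names below are made up for illustration (in the thesis they come from the IBGE 2008-2009 POF microdata), so this is not the thesis's actual procedure.

import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
categories = ["food", "housing", "transport", "health", "education", "leisure"]  # illustrative names
spend = pd.DataFrame(rng.gamma(2.0, 300.0, size=(1000, len(categories))),
                     columns=categories)              # fake household expenditures (R$)

shares = spend.div(spend.sum(axis=1), axis=0)          # proportionate distribution of spending
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(shares)
spend["cluster"] = km.labels_

# Profile each cluster by its mean budget share; the profiles are what would
# motivate labels such as "Basic Consumers" or "Exclusive Spenders".
print(shares.groupby(km.labels_).mean().round(3))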
Abstract:
We outline possible actions to be adopted by the European Union to ensure a better share of total coffee revenues for producers in developing countries. Ultimately, this means producers receiving a fair price for the commodity they supply, i.e., a market price that results from fair market conditions in the whole coffee producing chain. We argue that the proposals should take place in the consuming countries, since market conditions on the consuming-country side of the coffee producing chain are not fair; market failures and ingenious distortions are responsible for the enormous asymmetry of gains between the two sides. The first of three proposals for consumer-government-supported action is to help create domestic trading companies to achieve higher export volumes. These trading companies would be associated with roasters that, depending on the final product envisaged, could perform the roasting in the producing country and export the roasted, and sometimes ground, coffee, breaking the increasing importer-exporter verticalisation. Another measure would be the systematic provision of basic intelligence on the consuming markets. Statistics of the quantities sold according to mode of consumption, by broad "categories of coffee" and point of sale, could be produced for each country. They should be matched to the export/import data and complemented by (aggregate) country statistics on the roasting sector. This would greatly help producing countries design their own market and production strategies. Finally, we suggest a fund backed by a common EU tax on roasted coffee, created within the single-market tax harmonisation programme. This European Coffee Fund would have two main projects. Together with the ICO, it would launch an advertising campaign on coffee in general, aimed at counterbalancing the increasing "brandification" of coffee. Basic information on the characteristics of the plant and the drink would be conveyed, and the effort could be extended to the future Eastern European members of the Union, as a further assurance that EU processors would not have overly privileged access to these new markets. A quality label for every coffee sold in the Union could complement this initiative, helping to create a level playing field for products from outside the EU. A second project would consist of a careful diversification effort, to take place in selected producing countries.
Abstract:
Lucas (1987) has shown a surprising result in business-cycle research: the welfare cost of business cycles is relatively small. Using standard assumptions on preferences and a reasonable reduced form for consumption, we computed these welfare costs for the pre- and post-WWII eras, using three alternative trend-cycle decomposition methods. While the post-WWII results are broadly in line with Lucas's, for the pre-WWII era this basic result is dramatically altered. For the Beveridge and Nelson decomposition, and reasonable preference-parameter and discount values, we obtain a compensation of about 5% of consumption, which is by all means a sizable welfare cost (about US$ 1,000.00 a year).
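For reference, the compensation measure referred to above is the standard Lucas (1987) construction; under CRRA preferences with risk-aversion parameter φ and log-normal cyclical deviations of consumption with variance σ², it reduces to a simple expression (a textbook restatement, not the paper's full computation):

\[
E\!\left[u\big((1+\lambda)\,c_t\big)\right] = u(\bar c_t),
\qquad u(c)=\frac{c^{1-\phi}}{1-\phi}
\;\;\Longrightarrow\;\;
\ln(1+\lambda)=\tfrac{1}{2}\,\phi\,\sigma^{2},
\quad \lambda \approx \tfrac{1}{2}\,\phi\,\sigma^{2},
\]

where \(\bar c_t\) is the deterministic trend path of consumption.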
Abstract:
Lucas (1987) has shown a surprising result in business-cycle research: the welfare cost of business cycles is very small. Our paper makes several original contributions. First, in computing welfare costs, we propose a novel setup that separates the effects of uncertainty stemming from business-cycle fluctuations and from economic-growth variation. Second, we extend the sample from which the moments of consumption are computed: the literature has chosen primarily to work with post-WWII data, but for this period actual consumption is already the result of counter-cyclical policies and is potentially smoother than it would otherwise have been in their absence, so we also employ pre-WWII data. Third, we take an econometric approach and compute explicitly the asymptotic standard deviation of welfare costs using the Delta Method. Estimates of welfare costs show major differences between the pre-WWII and post-WWII eras; they can differ by up to a factor of 15 for reasonable parameter values (β = 0.985 and φ = 5). For example, in the pre-WWII period (1901-1941), welfare cost estimates are 0.31% of consumption if we consider only permanent shocks and 0.61% of consumption if we consider only transitory shocks. In comparison, the post-WWII era is much quieter: welfare costs of economic growth are 0.11% and welfare costs of business cycles are 0.037%, the latter being very close to the estimate in Lucas (0.040%). Estimates of marginal welfare costs are roughly twice the size of the total welfare costs. For the pre-WWII era, marginal welfare costs of economic-growth and business-cycle fluctuations are respectively 0.63% and 1.17% of per-capita consumption. The same figures for the post-WWII era are, respectively, 0.21% and 0.07% of per-capita consumption.
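The paper's setup separates permanent and transitory shocks; as a much simpler illustration of the Delta-Method step mentioned above, the sketch below computes a Lucas-style welfare cost and its asymptotic standard error for i.i.d. Gaussian cyclical shocks only (our simplification, not the paper's code; the input name log_c_cycle is hypothetical).

import numpy as np

def welfare_cost(log_c_cycle, phi=5.0):
    # log_c_cycle: cyclical component of log per-capita consumption (assumed i.i.d. Gaussian here)
    T = len(log_c_cycle)
    sigma2 = np.var(log_c_cycle, ddof=1)       # variance of the cyclical shocks
    lam = 0.5 * phi * sigma2                   # welfare cost as a share of consumption
    # Delta Method: lambda is a smooth function of sigma2; under Gaussian shocks
    # Var(sigma2_hat) ~ 2*sigma2^2/T, so se(lambda) = 0.5*phi*sigma2*sqrt(2/T).
    se = 0.5 * phi * sigma2 * np.sqrt(2.0 / T)
    return lam, se

rng = np.random.default_rng(0)
lam, se = welfare_cost(rng.normal(0, 0.03, size=80))   # fake data, for illustration only
print(f"lambda = {lam:.4%}  (s.e. {se:.4%})")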
Abstract:
The main objective of this paper is to propose a novel setup that allows estimating separately the welfare costs of the uncertainty stemming from business-cycle fluctuations and from economic-growth variation, when the two types of shocks associated with them (respectively, transitory and permanent shocks) hit consumption simultaneously. Separating these welfare costs requires dealing with degenerate bivariate distributions. Lévy's Continuity Theorem and the Disintegration Theorem allow us to adequately define the one-dimensional limiting marginal distributions. Under Normality, we show that the parameters of the original marginal distributions are not affected, providing the means for calculating separately the welfare costs of business-cycle fluctuations and of economic-growth variation. Our empirical results show that, if we consider only transitory shocks, the welfare cost of business cycles is much smaller than previously thought. Indeed, we found it to be negative: -0.03% of per-capita consumption! On the other hand, we found that the welfare cost of economic-growth variation is relatively large. Our estimate for reasonable preference-parameter values shows that it is 0.71% of consumption (US$ 208.98 per person, per year).
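Schematically, and in our own notation rather than necessarily the paper's, the setup separates a permanent (random-walk) component from a transitory (stationary) one in log consumption:

\[
\ln c_t = \ln c_0 + \mu t
\;+\; \underbrace{\textstyle\sum_{s\le t}\eta_s}_{\text{permanent}}
\;+\; \underbrace{z_t}_{\text{transitory}},
\qquad \eta_s \sim N(0,\sigma_\eta^{2}),\quad z_t \sim N(0,\sigma_z^{2}),
\]

so that the welfare cost of economic-growth variation is governed by \(\sigma_\eta^{2}\) and the welfare cost of business cycles by \(\sigma_z^{2}\), each computed from the corresponding one-dimensional limiting marginal distribution.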
Abstract:
The objective of this paper is to test for optimality of consumption decisions at the aggregate level (representative consumer), taking into account popular deviations from the canonical CRRA utility model: rule of thumb and habit. First, we show that rule-of-thumb behavior in consumption is observationally equivalent to behavior obtained from the optimizing model of King, Plosser and Rebelo (Journal of Monetary Economics, 1988), casting doubt on how reliable standard rule-of-thumb tests are. Second, although Carroll (2001) and Weber (2002) have criticized the linearization and testing of Euler equations for consumption, we provide a deeper critique directly applicable to current rule-of-thumb tests. Third, we show that there is no reason why return aggregation cannot be performed in the nonlinear setting of the Asset-Pricing Equation, since the latter is a linear function of individual returns. Fourth, aggregation of the nonlinear Euler equation forms the basis of a novel test of deviations from the canonical CRRA model of consumption in the presence of rule-of-thumb and habit behavior. We estimated 48 Euler equations using GMM, with encouraging results vis-à-vis the optimality of consumption decisions: at the 5% level, we rejected optimality only twice out of 48 times. Empirical-test results show that we can still rely on the canonical CRRA model so prevalent in macroeconomics: out of 24 regressions, we found the rule-of-thumb parameter to be statistically significant at the 5% level only twice, and the habit parameter γ to be statistically significant on four occasions. The main message of this paper is that proper return aggregation is critical to study intertemporal substitution in a representative-agent framework. In this case, we find little evidence of lack of optimality in consumption decisions, and deviations from the CRRA utility model along the lines of rule-of-thumb behavior and habit in preferences represent the exception, not the rule.
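One common way to write the kind of nonlinear Euler equation being aggregated and tested here, with an external habit parameter γ (γ = 0 recovering the canonical CRRA case), is the following; this illustrates the class of moment conditions involved and is not necessarily the paper's exact specification:

\[
E_t\!\left[\beta\left(\frac{C_{t+1}-\gamma C_t}{C_t-\gamma C_{t-1}}\right)^{-\phi} R_{j,t+1}\right]=1,
\qquad j=1,\dots,J,
\]

with the rule-of-thumb extension letting a fraction of households simply consume their current income, so that aggregate consumption growth partly tracks aggregate income growth.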
Abstract:
This paper tests the optimality of consumption decisions at the aggregate level, taking into account popular deviations from the canonical constant-relative-risk-aversion (CRRA) utility model: rule of thumb and habit. First, based on the critique in Carroll (2001) and Weber (2002) of the linearization and testing strategies using Euler equations for consumption, we provide extensive empirical evidence of their inappropriateness, a drawback for standard rule-of-thumb tests. Second, we propose a novel approach to test for consumption optimality in this context: nonlinear estimation coupled with return aggregation, where rule-of-thumb behavior and habit are special cases of an all-encompassing model. We estimated 48 Euler equations using GMM. At the 5% level, we rejected optimality only twice out of 48 times. Moreover, out of 24 regressions, we found the rule-of-thumb parameter to be statistically significant only twice. Hence, lack of optimality in consumption decisions represents the exception, not the rule. Finally, we found the habit parameter to be statistically significant on four occasions out of 24.
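A minimal sketch of the mechanics of GMM estimation of a nonlinear Euler equation of this kind, with simulated data, an identity weighting matrix and no habit or rule-of-thumb terms; it is an illustration under our own simplifying assumptions, not the authors' code.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T, J = 200, 3
c_growth = 1.005 + 0.01 * rng.standard_normal(T)       # fake gross consumption growth
R = 1.01 + 0.05 * rng.standard_normal((T, J))           # fake gross asset returns (T x J)

def moments(theta):
    beta, phi = theta
    m = beta * c_growth[:, None] ** (-phi) * R - 1.0     # Euler-equation errors, T x J
    return m.mean(axis=0)                                # sample moment conditions

def objective(theta):
    g = moments(theta)
    return g @ g                                         # identity weighting matrix

res = minimize(objective, x0=np.array([0.98, 2.0]), method="Nelder-Mead")
print("beta_hat, phi_hat =", res.x)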
Abstract:
If consumption is log-Normal and is decomposed into a linear deterministic trend and a stationary cycle, a surprising result in business-cycle research is that the welfare gains of eliminating uncertainty are relatively small. A possible problem with such calculations is the dichotomy between the trend and the cyclical components of consumption. In this paper, we abandon this dichotomy in two ways. First, we decompose consumption into a deterministic trend, a stochastic trend, and a stationary cyclical component, calculating the welfare gains of cycle smoothing. Calculations are carried out only after a careful discussion of the limitations of macroeconomic policy. Second, still under the stochastic-trend model, we incorporate a variable slope for consumption that depends negatively on the overall volatility in the economy. Results are obtained for a variety of preference parameterizations, parameter values, and macroeconomic-policy goals. They show that, once the dichotomy in the decomposition of consumption is abandoned, the welfare gains of cycle smoothing may be substantial, especially due to the volatility effect.
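Schematically, and in our own notation, the non-dichotomous decomposition described above has the form

\[
\ln c_t = \alpha + \mu t + \tau_t + z_t, \qquad \tau_t = \tau_{t-1} + \eta_t,
\]

with a deterministic trend, a stochastic trend \(\tau_t\) and a stationary cycle \(z_t\); in the second exercise the slope is allowed to vary with overall volatility, e.g. \(\mu_t = \bar\mu - \kappa\,\sigma_t\) (an illustrative functional form, not necessarily the paper's).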
Abstract:
A model of overlapping generations in continuous time is constructed. Individuals pass through two distinct periods during their lifetimes. During the first, they work, save, and have a death probability equal to zero. During the second, beginning T periods after birth, their probability of death becomes p and they retire. The capital stock and stationary-state income are calculated for two situations: in the first, people live off their accumulated capital after retirement; in the second, they live off a state transfer payment financed through an income tax. To simplify matters, in this preliminary version it is assumed that there is no population growth and that the instantaneous elasticity of substitution of consumption is unitary.
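A schematic statement of the individual problem, under log utility (the unitary-elasticity case mentioned above) and with our own notation for the discount rate ρ and death hazard p; this is a reading of the abstract, not the paper's exact formulation:

\[
\max_{\{c(a)\}} \;\int_0^{T} e^{-\rho a}\,\ln c(a)\,da
\;+\; \int_T^{\infty} e^{-\rho a - p\,(a-T)}\,\ln c(a)\,da ,
\]

where the individual works and saves for ages \(a<T\) (zero death probability) and retires thereafter, facing a constant death hazard p; the stationary capital stock and income then follow from aggregating asset holdings across cohorts under each of the two retirement-financing schemes.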
Abstract:
This work explores how Argentina overcame the Great Depression and asks whether active macroeconomic interventions made any contribution to the recovery. In particular, we study Argentine macroeconomic policy as it deviated from gold-standard orthodoxy after the final suspension of convertibility in 1929. As elsewhere, fiscal policy in Argentina was conservative, and had little power to smooth output. Monetary policy became heterodox after 1929. The first and most important stage of institutional change took place with the switch from a metallic monetary regime to a fiduciary regime in 1931; the Caja de Conversión (Conversion Office, a currency board) began rediscounting as a means to sterilize gold outflows and avoid deflationary pressures, thus breaking from orthodox "rules of the game." However, the actual injections of liquidity were small and were not enough to fully offset the incipient monetary contractions: the "Keynes" effect was weak or negative. Rather, recovery derived from changes in beliefs and expectations surrounding the shift in the monetary and exchange-rate regime, and the delinking of gold flows and the money base. Agents perceived a new regime, as shown by the path of consumption, investment, and estimated ex ante real interest rates: the "Mundell" effect was dominant. Notably, this change of regime predated a later, and supposedly more significant, stage of institutional reform, namely the creation of the central bank in 1935. Still, the extent of intervention was weak, and insufficient to fully offset external shocks to prices and money. Argentine macropolicy was heterodox in terms of the change of regime, but still conservative in terms of the tentative scope of the measures taken.
Abstract:
Lucas (2000) estimates that the US welfare costs of inflation are around 1% of GDP. This measurement is consistent with a specific distorting channel in terms of the Bailey triangle under the demand schedule for the monetary base (outside money): the displacement of resources from the production of consumption goods to household transaction time à la Baumol. Here, we also consider several new types of distortions in the manufacturing and banking industries. Our new evidence shows that both banks and firms demand special occupational employments to avoid the inflation tax. We define the concept of "the float labor": the occupational employments that are affected by the inflation rate. More administrative workers are hired relative to the blue-collar workers who produce consumption goods, and this new phenomenon makes the manufacturing industry more roundabout. To take this new stylized fact and others into account, we simultaneously rework both "The model 5: A Banking Sector - 2" formulated by Lucas (1993) and "The Competitive Banking System" proposed by Yoshino (1993). This modelling allows us to better characterize the new types of misallocations. We find that the maximum value of the resources wasted by the US economy occurred in the years 1980-81, after the second oil shock. For these years, we estimate the excess resources allocated to each specific distorting channel: i) US commercial banks spent additional resources of around 2% of GDP; ii) between 2.4% and 4.1% of GDP was used for firm floating time; and iii) between 3.1% and 4.5% of GDP was allocated to household transaction time. The Bailey triangle under the demand schedule for the monetary base represented around 1% of GDP, which is consistent with Lucas (2000). We estimate that the total US welfare costs of inflation were around 10% of GDP in terms of the consumption goods foregone. The big difference between our results and Lucas (2000) is mainly due to the Harberger triangle in the market for loans (inside money), which is part of household transaction time, of the firm float labor, and of the distortion in the banking industry. This triangle arises from the widening interest-rate spread in the presence of a distorting inflation tax and under a fractional reserve system. The Harberger triangle can represent 80% of the total welfare costs of inflation, while the remaining percentage is split almost equally between the Bailey triangle and the resources used for bank services. Finally, we formulate several theorems in terms of the optimal non-neutral monetary policy, so as to compare with classical monetary theory.
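For reference, the Bailey-triangle measure that serves as the Lucas (2000) benchmark above can be written, for a money-demand schedule m(i) giving real base money (relative to income) as a function of the nominal interest rate i, as (a standard textbook statement, not the paper's full model):

\[
w(i) \;=\; \int_0^{i} m(x)\,dx \;-\; i\,m(i),
\]

equivalently the area under the inverse money-demand curve between \(m(i)\) and \(m(0)\); the paper's additional channels (bank resources, firm float labor, household transaction time, and the Harberger triangle in the loan market) come on top of this term.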
Abstract:
This paper investigates the role of the consumption-wealth ratio in predicting future stock returns through a panel approach. We follow the theoretical framework proposed by Lettau and Ludvigson (2001), in which a model derived from a nonlinear consumer budget constraint is used to establish the link between the consumption-wealth ratio and stock returns. Using the G7's quarterly aggregate and financial data from the first quarter of 1981 to the first quarter of 2014, we construct an unbalanced panel that we use both for estimating the parameters of the cointegrating residual from the shared trend among consumption, asset wealth and labor income, cay, and for performing in-sample and out-of-sample forecasting regressions. Due to the panel structure, we propose methodologies for estimating cay and producing forecasts that differ from those applied by Lettau and Ludvigson (2001). The results indicate that cay is in fact a strong and robust predictor of future stock returns at intermediate and long horizons, but performs poorly in predicting one- or two-quarter-ahead stock returns.
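A hypothetical single-country sketch of the two-step procedure the abstract builds on (plain OLS in the spirit of Lettau-Ludvigson, not the paper's panel implementation or its DOLS-style refinements); the data below are simulated placeholders.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
T = 132                                                   # roughly 1981Q1-2014Q1
a = np.cumsum(rng.normal(0.005, 0.02, T))                 # log asset wealth (fake)
y = np.cumsum(rng.normal(0.004, 0.01, T))                 # log labor income (fake)
c = 0.3 * a + 0.6 * y + rng.normal(0, 0.01, T)            # log consumption (fake)
ret = rng.normal(0.015, 0.08, T)                          # quarterly stock returns (fake)
df = pd.DataFrame({"c": c, "a": a, "y": y, "ret": ret})

# Step 1: cointegrating residual (cay) from the shared trend among c, a and y.
coint = sm.OLS(df["c"], sm.add_constant(df[["a", "y"]])).fit()
df["cay"] = coint.resid

# Step 2: forecasting regression of h-quarter-ahead cumulative returns on cay.
h = 4
df["ret_fwd"] = df["ret"].rolling(h).sum().shift(-h)      # returns over t+1, ..., t+h
fc = sm.OLS(df["ret_fwd"], sm.add_constant(df["cay"]), missing="drop").fit()
print(fc.params, fc.tvalues)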
Abstract:
This thesis examines the creation of value in private equity and, in particular, analyzes value creation in 3G Capital's acquisition of Burger King. A specific model is applied that decomposes value creation into several drivers, in order to answer the question of how value creation can be addressed in private equity investments. Although previous research by Achleitner et al. (2010) introduced a specific model that addresses value creation in private equity, that model was neither applied to an individual company nor linked to indirect drivers that explain the dynamics and rationales for the creation of value. In turn, this thesis applies the quantitative model to an ongoing private equity investment and thereby provides several extensions to turn it into a better forecasting model for ongoing investments, instead of only analyzing, from an ex post perspective, a deal that has already been divested. The chosen research approach is a case study of the Burger King buyout that includes, first, an extensive review of the current academic literature; second, a quantitative calculation and qualitative interpretation of different direct value drivers; third, a qualitative breakdown of indirect drivers; and lastly, a recapitulating discussion of value creation and value drivers. Presenting a very successful private equity investment and demonstrating in detail the dynamics and mechanisms that drive value creation in this case provides important implications for other private equity firms, as well as public firms, seeking to develop their own approach to value creation.
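As a purely illustrative piece of arithmetic, with made-up numbers and only one common style of value-creation bridge of the kind the Achleitner et al. (2010) framework formalizes (not the thesis's model or figures), equity value creation in a buyout can be split into an EBITDA effect, a multiple effect and debt paydown:

# Illustrative only: hypothetical entry/exit figures for a generic buyout.
entry = {"ebitda": 1.0, "multiple": 8.0, "net_debt": 5.0}
exit_ = {"ebitda": 1.5, "multiple": 9.0, "net_debt": 3.0}

equity_in = entry["ebitda"] * entry["multiple"] - entry["net_debt"]
equity_out = exit_["ebitda"] * exit_["multiple"] - exit_["net_debt"]
total = equity_out - equity_in

ebitda_effect = (exit_["ebitda"] - entry["ebitda"]) * entry["multiple"]   # operational growth
multiple_effect = (exit_["multiple"] - entry["multiple"]) * exit_["ebitda"]  # multiple expansion
debt_paydown = entry["net_debt"] - exit_["net_debt"]                      # deleveraging
print(total, ebitda_effect + multiple_effect + debt_paydown)  # both print 7.5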
Abstract:
Lawrance (1991) has shown, through the estimation of consumption Euler equations, that subjective rates of impatience (time preference) in the U.S. are three to five percentage points higher for households with lower average labor incomes than for those with higher labor income. From a theoretical perspective, the sign of this correlation in a job-search model seems at first to be undetermined, since more impatient workers tend to accept wage offers that less impatient workers would not, thereby remaining unemployed for less time. The main result of this paper is showing that, regardless of the existence of effects of opposite sign, and independently of the particular specifications of the givens of the model, less impatient workers always end up, in the long run, with a higher average income. The result is based on the (unique) invariant Markov distribution of wages associated with the dynamic optimization problem solved by the consumers. An example is provided to illustrate the method.
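A stylized McCall-style search sketch of the mechanism described above (not the paper's model or its invariant-distribution argument): more impatient workers, with a lower discount factor beta, set a lower reservation wage and therefore accept offers with a lower mean. The offer distribution and the flow value b are invented for illustration.

import numpy as np

wages = np.linspace(0.5, 2.0, 151)               # support of the wage-offer distribution
probs = np.full(wages.size, 1.0 / wages.size)    # uniform offer probabilities (illustrative)
b = 0.4                                          # flow value while unemployed

def reservation_wage(beta, tol=1e-10):
    U = b / (1.0 - beta)                         # initial guess: value of never accepting
    while True:
        # Bellman operator for the value of unemployment (a contraction with modulus beta):
        new = b + beta * np.sum(np.maximum(wages / (1.0 - beta), U) * probs)
        if abs(new - U) < tol:
            return (1.0 - beta) * new            # reservation wage w* = (1 - beta) * U
        U = new

for beta in (0.90, 0.96):                        # more impatient vs. less impatient worker
    w_star = reservation_wage(beta)
    accepted = wages >= w_star
    mean_wage = np.sum(wages[accepted] * probs[accepted]) / probs[accepted].sum()
    print(f"beta={beta}: reservation wage {w_star:.3f}, mean accepted wage {mean_wage:.3f}")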