990 results for "aggregate data"


Relevance:

100.00%

Publisher:

Abstract:

The objective of this paper is to test for optimality of consumption decisions at the aggregate level (representative consumer), taking into account popular deviations from the canonical CRRA utility model: rule of thumb and habit. First, we show that rule-of-thumb behavior in consumption is observationally equivalent to behavior obtained from the optimizing model of King, Plosser and Rebelo (Journal of Monetary Economics, 1988), casting doubt on how reliable standard rule-of-thumb tests are. Second, although Carroll (2001) and Weber (2002) have criticized the linearization and testing of Euler equations for consumption, we provide a deeper critique directly applicable to current rule-of-thumb tests. Third, we show that there is no reason why return aggregation cannot be performed in the nonlinear setting of the Asset-Pricing Equation, since the latter is a linear function of individual returns. Fourth, aggregation of the nonlinear Euler equation forms the basis of a novel test of deviations from the canonical CRRA model of consumption in the presence of rule-of-thumb and habit behavior. We estimated 48 Euler equations using GMM, with encouraging results vis-à-vis the optimality of consumption decisions: at the 5% level, we rejected optimality only twice out of 48 times. Empirical-test results show that we can still rely on the canonical CRRA model so prevalent in macroeconomics: out of 24 regressions, we found the rule-of-thumb parameter to be statistically significant at the 5% level only twice, and the habit parameter γ to be statistically significant on four occasions. The main message of this paper is that proper return aggregation is critical to studying intertemporal substitution in a representative-agent framework. In this case, we find little evidence of a lack of optimality in consumption decisions, and deviations from the CRRA utility model along the lines of rule-of-thumb behavior and habit in preferences are the exception, not the rule.
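
For readers unfamiliar with the mechanics, the following is a minimal sketch of two-step GMM estimation of the canonical CRRA Euler equation, E_t[β(C_{t+1}/C_t)^(−γ) R_{t+1}] = 1, with lagged variables as instruments. It is the standard textbook case with hypothetical variable names, not necessarily the authors' exact estimating equation.

# Minimal two-step GMM sketch for the canonical CRRA Euler equation
# E_t[ beta * (C_{t+1}/C_t)^(-gamma) * R_{t+1} - 1 ] = 0,
# using an instrument matrix (e.g. a constant plus lagged consumption
# growth and lagged returns). Variable names are illustrative.
import numpy as np
from scipy.optimize import minimize

def moments(theta, cons_growth, returns, instruments):
    beta, gamma = theta
    error = beta * cons_growth ** (-gamma) * returns - 1.0   # Euler-equation error
    return instruments * error[:, None]                      # T x k moment matrix

def gmm_objective(theta, cons_growth, returns, instruments, W):
    g_bar = moments(theta, cons_growth, returns, instruments).mean(axis=0)
    return g_bar @ W @ g_bar

def estimate(cons_growth, returns, instruments):
    k = instruments.shape[1]
    args1 = (cons_growth, returns, instruments, np.eye(k))   # step 1: identity weights
    step1 = minimize(gmm_objective, x0=[0.98, 2.0], args=args1, method="Nelder-Mead")
    S = np.cov(moments(step1.x, cons_growth, returns, instruments), rowvar=False)
    args2 = (cons_growth, returns, instruments, np.linalg.inv(S))  # step 2: efficient weights
    step2 = minimize(gmm_objective, x0=step1.x, args=args2, method="Nelder-Mead")
    return step2.x                                            # estimated (beta, gamma)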

Relevance:

100.00%

Publisher:

Abstract:

This paper tests the optimality of consumption decisions at the aggregate level, taking into account popular deviations from the canonical constant-relative-risk-aversion (CRRA) utility model: rule of thumb and habit. First, based on the critique in Carroll (2001) and Weber (2002) of the linearization and testing strategies using Euler equations for consumption, we provide extensive empirical evidence of their inappropriateness, a drawback for standard rule-of-thumb tests. Second, we propose a novel approach to test for consumption optimality in this context: nonlinear estimation coupled with return aggregation, where rule-of-thumb behavior and habit are special cases of an all-encompassing model. We estimated 48 Euler equations using GMM. At the 5% level, we rejected optimality only twice out of 48 times. Moreover, out of 24 regressions, we found the rule-of-thumb parameter to be statistically significant only twice. Hence, lack of optimality in consumption decisions represents the exception, not the rule. Finally, we found the habit parameter to be statistically significant on four occasions out of 24.
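
As a point of reference, a standard external-habit version of the CRRA Euler equation (not necessarily the exact all-encompassing specification estimated in the paper) is

\[ E_t\left[ \beta \left( \frac{C_{t+1} - \theta C_t}{C_t - \theta C_{t-1}} \right)^{-\gamma} R_{j,t+1} \right] = 1, \]

where θ is the habit parameter and θ = 0 recovers the canonical CRRA condition; rule-of-thumb behavior is then layered on by letting a fraction of consumers simply spend their current income.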

Relevance:

100.00%

Publisher:

Abstract:

Email exchange in 2013 between Kathryn Maxson (Duke) and Kris Wetterstrand (NHGRI), regarding country funding and other data for the HGP sequencing centers. Also includes the email request for such information, from NHGRI to the centers, in 2000, and the aggregate data collected.

Relevance:

80.00%

Publisher:

Abstract:

We present a unique empirical analysis of the properties of the New Keynesian Phillips Curve (NKPC) using an international dataset of aggregate and disaggregate sectoral inflation. Our results from panel time-series estimation clearly indicate that sectoral heterogeneity has important consequences for aggregate inflation behaviour. Heterogeneity helps to explain the overestimation of inflation persistence and the underestimation of the role of marginal costs in empirical investigations of the NKPC that use aggregate data. We find that combining disaggregate information with heterogeneity-consistent estimation techniques helps to reconcile, to a large extent, the NKPC with the data.
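
For reference, the hybrid NKPC that is typically taken to the data (a standard formulation, not necessarily the exact specification estimated here) relates inflation to expected and lagged inflation and to real marginal cost:

\[ \pi_t = \gamma_f\, E_t \pi_{t+1} + \gamma_b\, \pi_{t-1} + \lambda\, \widehat{mc}_t + \varepsilon_t, \]

so that overestimating γ_b (persistence) and underestimating λ (the role of marginal costs) are exactly the aggregate-data biases referred to above.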

Relevance:

80.00%

Publisher:

Abstract:

We investigate whether there was a stable money demand function for Japan in the 1990s using both aggregate and disaggregate time series data. The aggregate data appear to support the contention that there was no stable money demand function, while the disaggregate data show that there was a stable money demand function. Nor was there any indication of the presence of a liquidity trap. Possible sources of discrepancy are explored, and the diametrically opposite results between the aggregate and disaggregate analyses are attributed to the neglected heterogeneity among micro units. We also conduct a simulation analysis to show that, when heterogeneity among micro units is present, the prediction of aggregate outcomes using aggregate data is less accurate than the prediction based on micro equations. Moreover, policy evaluation based on aggregate data can be grossly misleading.
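
A stylized numerical illustration of that final point (not the paper's simulation design): when micro units have heterogeneous coefficients and their shares shift over time, forecasts from a single aggregate equation deteriorate relative to forecasts built by summing unit-level equations.

# Stylized illustration: two groups with different coefficients and shifting
# shares. Forecasting the aggregate from one aggregate regression is less
# accurate than summing forecasts from the two micro regressions.
import numpy as np

rng = np.random.default_rng(0)
T = 200
t = np.arange(T)
x1 = 1.0 + t / 100.0                              # group 1 regressor: rises throughout
x2 = np.where(t < 100, 1.0 + t / 100.0,           # group 2 regressor: rises, then falls
              2.0 - (t - 100) / 100.0)
b1, b2 = 0.2, 1.5                                 # heterogeneous micro coefficients
y1 = b1 * x1 + rng.normal(0, 0.02, T)
y2 = b2 * x2 + rng.normal(0, 0.02, T)
Y, X = y1 + y2, x1 + x2                           # aggregate outcome and regressor

train, test = slice(0, 100), slice(100, T)        # fit on first half, forecast second half

def ols_forecast(x, y):
    Z = np.column_stack([np.ones(T), x])
    coef = np.linalg.lstsq(Z[train], y[train], rcond=None)[0]
    return Z[test] @ coef

pred_agg = ols_forecast(X, Y)                             # one aggregate equation
pred_micro = ols_forecast(x1, y1) + ols_forecast(x2, y2)  # sum of micro equations

mse = lambda p: float(np.mean((Y[test] - p) ** 2))
print("aggregate-equation forecast MSE:", round(mse(pred_agg), 4))
print("micro-equations forecast MSE:   ", round(mse(pred_micro), 4))   # markedly smaller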

Relevance:

80.00%

Publisher:

Abstract:

This paper evaluates inflation targeting and assesses its merits by comparing alternative targets in a macroeconomic model. We use European aggregate data to evaluate the performance of alternative policy rules under alternative inflation targets in terms of output losses. We employ two major alternative policy rules, forward-looking and spontaneous adjustment, and three alternative inflation targets: zero percent, two percent, and four percent inflation rates. The simulation findings suggest that forward-looking rules contribute to macroeconomic stability and increase monetary policy credibility. The superiority of a positive inflation target, in terms of output losses, emerges for the aggregate data. The same methodology, when applied to individual countries, however, suggests that country-specific flexible inflation targeting can improve employment prospects in Europe.
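
A forward-looking rule of the kind typically simulated in such exercises (a generic Taylor-type form, not necessarily the exact rule used in the paper) sets the policy rate as

\[ i_t = r^{*} + \pi^{*} + \phi_{\pi}\left(E_t \pi_{t+k} - \pi^{*}\right) + \phi_{y}\, y_t, \]

where π* is the inflation target (0%, 2% or 4% in the comparisons above), E_t π_{t+k} is expected inflation and y_t is the output gap; output losses under each target can then be compared through a loss function such as Σ_t (y_t² + λ(π_t − π*)²).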

Relevance:

70.00%

Publisher:

Abstract:

Work presented as part of the Master's programme in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering.

Relevance:

70.00%

Publisher:

Abstract:

The objective of this paper is to analyse to what extent the use of cross-section data distorts the estimated elasticities for car ownership demand when the observed variables do not correspond to an equilibrium state for some individuals in the sample. Our proposal consists of approximating the equilibrium values of the observed variables by constructing a pseudo-panel data set, which entails averaging individuals observed at different points in time into cohorts. The results show that individual and aggregate data lead to almost the same value for the income elasticity, whereas for the working-adults elasticity the similarity is less pronounced.
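
A minimal sketch of the pseudo-panel construction step in Python/pandas (column names are hypothetical): individuals from repeated cross-sections are grouped into birth-year cohorts, and the cohort-by-survey-year means become the panel observations.

# Build a pseudo-panel: average individuals into birth-year cohorts per survey year.
import pandas as pd

def build_pseudo_panel(df: pd.DataFrame, cohort_width: int = 5) -> pd.DataFrame:
    # df has one row per surveyed individual with columns:
    # 'survey_year', 'birth_year', 'cars', 'income', 'working_adults'
    df = df.copy()
    df["cohort"] = (df["birth_year"] // cohort_width) * cohort_width
    cells = (df.groupby(["cohort", "survey_year"])
               .agg(cars=("cars", "mean"),
                    income=("income", "mean"),
                    working_adults=("working_adults", "mean"),
                    n_obs=("cars", "size"))
               .reset_index())
    # Small cells give noisy cohort means; a common practice is to drop them.
    return cells[cells["n_obs"] >= 50]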

Relevance:

70.00%

Publisher:

Abstract:

The purpose of this thesis is to study the factors that explain bilateral fiber trade flows, which is done by analyzing bilateral trade flows during 1990-2006. It is also studied whether there are differences between fiber types. The thesis uses a gravity model approach to study the trade flows; the gravity model is mostly used to study aggregate trade data between trading countries, whereas here it is applied to individual fibers and estimated on a panel data set. The regression results show clearly that there are benefits in studying different fibers separately, as the effects differ considerably from each other. Furthermore, the thesis supports the existence of a Linder effect in certain fiber types.
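
The log-linear gravity specification that such fiber-level panel estimations typically build on (a standard form, with the Linder effect captured by the absolute difference in per-capita incomes; not necessarily the thesis's exact equation) is

\[ \ln T_{ijt}^{f} = \beta_0 + \beta_1 \ln Y_{it} + \beta_2 \ln Y_{jt} + \beta_3 \ln D_{ij} + \beta_4 \left| \ln y_{it} - \ln y_{jt} \right| + \varepsilon_{ijt}^{f}, \]

where T_{ijt}^f is the trade flow of fiber f from country i to country j, Y denotes GDP, D_{ij} distance and y per-capita GDP; a negative β_4 is evidence of a Linder effect.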

Relevance:

70.00%

Publisher:

Abstract:

We explore the relationship between quality in work and aggregate productivity in regions and sectors. Using recent Spanish aggregate data for the period 2001-2006, we find that quality in work may be an important factor in explaining productivity levels in sectors and regions. We use two alternative definitions of quality in work: one from survey data and the other from a social-indicators approach. We also use two different measurements of labour productivity to test the robustness of our results. The estimates are obtained using a simultaneous equation model for our panel of data, and we find important differences between high-tech and low-tech sectors: a positive relationship between quality in work and productivity in the former case, and a negative relationship in the latter. Consequently, on the one hand, quality in work is not only an objective per se but may also be a production factor able to increase the wealth of regions; on the other hand, at the aggregate level, high productivity levels may coincide with lower quality-in-work conditions.
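
Schematically, the simultaneous-equation setup described above (a stylized two-equation form, not the authors' exact specification) can be written for region/sector i and year t as

\[ \ln P_{it} = \alpha_1 Q_{it} + \boldsymbol{\beta}_1' \mathbf{x}_{it} + u_{it}, \qquad Q_{it} = \alpha_2 \ln P_{it} + \boldsymbol{\beta}_2' \mathbf{z}_{it} + v_{it}, \]

where P is labour productivity, Q the quality-in-work indicator, and x and z are control vectors; estimating the two equations jointly allows the sign of α_1 to differ between high-tech and low-tech sectors.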

Relevance:

70.00%

Publisher:

Abstract:

We study constrained efficient aggregate risk sharing and its consequences for the behavior of macro-aggregates in a dynamic Mirrlees (1971) setting. Privately observed idiosyncratic productivity shocks are assumed to be independent of i.i.d. publicly observed aggregate shocks. Yet private allocations display memory with respect to past aggregate shocks when idiosyncratic shocks are also i.i.d. Under a mild restriction on the nature of optimal allocations, the result extends to more persistent idiosyncratic shocks, for all but the limit at which idiosyncratic risk disappears and the model collapses to a pure-heterogeneity repeated Mirrlees economy identical to Werning [2007]. When preferences are iso-elastic, we show that an allocation is memoryless only if it displays a strong form of separability with respect to aggregate shocks. Separability characterizes the pure-heterogeneity limit as well as the general case with log preferences. With less than full persistence and risk aversion different from unity, both memory and non-separability characterize optimal allocations. Exploiting the fact that non-separability is associated with state-varying labor wedges, we apply a business cycle accounting procedure (e.g. Chari et al. [2007]) to the aggregate data generated by the model. We show that, whenever risk aversion is greater than one, our model produces efficient counter-cyclical labor wedges.
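
The labor wedge referred to in the business cycle accounting exercise is, in its standard form (following Chari et al. [2007]; the notation here is generic rather than the paper's), the gap between the household's marginal rate of substitution and the marginal product of labor:

\[ 1 - \tau_{t} = \frac{-\,u_{n}(c_t, n_t)/u_{c}(c_t, n_t)}{\mathrm{MPN}_t}, \]

so a state-varying τ_t signals non-separability of the optimal allocation with respect to aggregate shocks, and a counter-cyclical τ_t is the pattern the model generates when risk aversion exceeds one.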

Relevance:

70.00%

Publisher:

Abstract:

Despite the extensive work on currency mismatches, research on the determinants and effects of maturity mismatches is scarce. In this paper I show that emerging market maturity mismatches are negatively affected by capital inflows and price volatilities. Furthermore, I find that banks with low maturity mismatches are more profitable during crisis periods but less profitable otherwise. The latter result implies that banks face a tradeoff between higher returns and risk; hence, the channeling of short-term capital into long-term loans is caused by cronyism and implicit guarantees rather than by the depth of the financial market. The positive relationship between maturity mismatches and price volatility, on the other hand, shows that the banks of countries with high exchange rate and interest rate volatilities cannot, or choose not to, hedge themselves. These results follow from a panel regression on a data set I constructed by merging bank-level data with aggregate data, which is advantageous over traditional studies that focus only on aggregate data.
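
A schematic of the kind of panel regression described (variable names are illustrative, not the paper's exact specification):

\[ \mathrm{Mismatch}_{b,c,t} = \beta_1\, \mathrm{Inflows}_{c,t} + \beta_2\, \mathrm{Volatility}_{c,t} + \boldsymbol{\gamma}' \mathbf{x}_{b,t} + \mu_b + \lambda_t + \varepsilon_{b,c,t}, \]

with bank fixed effects μ_b and time effects λ_t, where the bank-level controls x_{b,t} come from the bank data and the inflow and volatility regressors from the aggregate (country-level) data, which is the merge referred to above.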

Relevance:

70.00%

Publisher:

Abstract:

As for other complex diseases, linkage analyses of schizophrenia (SZ) have produced evidence for numerous chromosomal regions, with inconsistent results reported across studies. The presence of locus heterogeneity appears likely and may reduce the power of linkage analyses if homogeneity is assumed. In addition, when multiple heterogeneous datasets are pooled, intersample variation in the proportion of linked families (α) may diminish the power of the pooled sample to detect susceptibility loci, in spite of the larger sample size obtained. We compare the significance of linkage findings obtained using allele-sharing LOD scores (LODexp), which assume homogeneity, and heterogeneity LOD scores (HLOD) in European American and African American NIMH SZ families. We also pool these two samples and evaluate the relative power of the LODexp and two different heterogeneity statistics. One of these (HLOD-P) estimates the heterogeneity parameter α only in the aggregate data, while the second (HLOD-S) determines α separately for each sample. In separate and combined data, we show consistently improved performance of HLOD scores over LODexp. Notably, genome-wide significant evidence for linkage is obtained at chromosome 10p in the European American sample using a recessive HLOD score. When the two samples are combined, linkage at the 10p locus also achieves genome-wide significance under HLOD-S, but not HLOD-P. Using HLOD-S, improved evidence for linkage was also obtained for a previously reported region on chromosome 15q. In linkage analyses of complex disease, power may be maximised by routinely modelling locus heterogeneity within individual datasets, even when multiple datasets are combined to form larger samples.
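
For reference, the heterogeneity (admixture) LOD score compared with the homogeneity LOD above has the standard form

\[ \mathrm{HLOD}(\theta,\alpha) = \sum_{f=1}^{F} \log_{10}\!\left[\alpha\, \frac{L_f(\theta)}{L_f(\tfrac{1}{2})} + (1-\alpha)\right], \]

maximised over the recombination fraction θ and the proportion of linked families α. In the comparison described above, HLOD-P maximises a single α over the pooled families, whereas HLOD-S maximises a separate α within each sample before the evidence is combined.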

Relevance:

70.00%

Publisher:

Abstract:

When applying multivariate analysis techniques in information systems and social science disciplines, such as management information systems (MIS) and marketing, the assumption that the empirical data originate from a single homogeneous population is often unrealistic. When applying a causal modeling approach, such as partial least squares (PLS) path modeling, segmentation is a key issue in coping with the problem of heterogeneity in estimated cause-and-effect relationships. This chapter presents a new PLS path modeling approach which classifies units on the basis of the heterogeneity of the estimates in the inner model. If unobserved heterogeneity significantly affects the estimated path model relationships at the aggregate data level, the methodology allows homogeneous groups of observations to be created that exhibit distinctive path model estimates. The approach thus provides differentiated analytical outcomes that permit more precise interpretations of each segment formed. An application to a large data set from an American Customer Satisfaction Index (ACSI) example substantiates the methodology's effectiveness in evaluating PLS path modeling results.
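
The general idea can be sketched as an iterative response-based partition (a simplified illustration, not the chapter's exact algorithm, with an ordinary regression standing in for the PLS inner model): units start in random segments, segment-specific inner-model coefficients are estimated, and each unit is reassigned to the segment whose coefficients explain it best, until the partition stabilises.

# Simplified response-based segmentation sketch. X holds the (latent-variable)
# scores driving one endogenous construct y; a full PLS estimation would
# replace the plain least-squares step.
import numpy as np

def segment_inner_model(X, y, n_segments=2, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.integers(n_segments, size=len(y))            # random initial partition
    for _ in range(n_iter):
        betas = []
        for s in range(n_segments):
            Xs, ys = X[labels == s], y[labels == s]
            if len(ys) <= X.shape[1]:                          # guard: (near-)empty segment
                Xs, ys = X, y                                  # fall back to pooled data
            betas.append(np.linalg.lstsq(Xs, ys, rcond=None)[0])
        # reassign each unit to the segment whose coefficients fit it best
        sq_resid = np.stack([(y - X @ b) ** 2 for b in betas], axis=1)
        new_labels = sq_resid.argmin(axis=1)
        if np.array_equal(new_labels, labels):                 # partition has stabilised
            break
        labels = new_labels
    return labels, betas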

Relevance:

60.00%

Publisher:

Abstract:

Morrell, Taylor and Kerr, from the University of Sydney's Department of Public Health, review the evidence of an association between unemployment and psychological and physical ill-health in young people aged 15-24 years. Aggregate data show youth unemployment and youth suicide to be strongly associated. Youth unemployment is also associated with psychological symptoms, such as depression and loss of confidence. Effects on physical health have been less extensively studied; however, there is some evidence for an association with raised blood pressure. Finally, the prevalence of lifestyle risk factors (cannabis use and, less consistently, tobacco and alcohol consumption) is higher in unemployed than in employed young people.