867 results for Panel-data econometrics


Relevance:

30.00%

Publisher:

Abstract:

This interactive resource introduces Social Science students to the recognition and interpretation of data presented in a table. The RLO uses data based on the causes of death of Rock and R&B musicians. When viewing an object, note that the panel on the left, generated by the repository, can be dragged sideways to view the learning object full screen. Item from RLO-CETL.

Relevance:

30.00%

Publisher:

Abstract:

A panel presentation at Repository Fringe 2015

Relevance:

30.00%

Publisher:

Abstract:

This paper applies stationarity tests to examine evidence of market integration for a relatively large sample of food products in Colombia. We find little support for market integration when using the univariate KPSS tests for stationarity. However, within a panel context and after allowing for cross-sectional dependence, the Hadri tests provide much more evidence supporting the view that food markets are integrated or, in other words, that the law of one price holds for most products.
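A minimal numpy sketch of the univariate KPSS statistic the abstract refers to; `kpss_stat` and the simulated series are illustrative assumptions, not the paper's data or exact implementation:

```python
import numpy as np

def kpss_stat(y, lags=0):
    """KPSS statistic for level stationarity: partial sums of demeaned
    residuals scaled by a Bartlett-kernel long-run variance estimate.
    Large values reject stationarity (5% critical value: 0.463)."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    e = y - y.mean()              # residuals from a regression on a constant
    S = np.cumsum(e)              # partial-sum process
    s2 = (e @ e) / T              # long-run variance estimate
    for l in range(1, lags + 1):
        w = 1.0 - l / (lags + 1)  # Bartlett weights
        s2 += 2.0 * w * (e[:-l] @ e[l:]) / T
    return (S @ S) / (T ** 2 * s2)

# a stationary series keeps the statistic small; a random walk inflates it
rng = np.random.default_rng(0)
stationary = rng.standard_normal(400)
random_walk = rng.standard_normal(400).cumsum()
```

The Hadri panel test mentioned above, by contrast, aggregates standardized individual statistics across the cross-section, which is why allowing for cross-sectional dependence matters there.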

Relevance:

30.00%

Publisher:

Abstract:

Rationalizing non-participation as a resource deficiency in the household, this paper identifies strategies for milk-market development in the Ethiopian highlands. The additional amounts of covariates required for positive marketable surplus ('distances-to-market') are computed from a model in which production and sales are correlated; sales are left-censored at some unobserved thresholds; production efficiencies are heterogeneous; and the data are in the form of a panel. Incorporating these features into the modeling exercise is important because they are fundamental to the data-generating environment. There are four reasons. First, because production and sales decisions are enacted within the same household, both decisions are affected by the same exogenous shocks, and production and sales are therefore likely to be correlated. Second, because selling involves time, and time is arguably the most important resource available to a subsistence household, the minimum sales amount is not zero but, rather, some unobserved threshold that lies beyond zero. Third, the potential existence of heterogeneous abilities in management, ones that lie latent from the econometrician's perspective, suggests that production efficiencies should be permitted to vary across households. Fourth, we observe a single set of households during multiple visits in a single production year. The results convey clearly that institutional and production innovations alone are insufficient to encourage participation. Market-precipitating innovation requires complementary inputs, especially improvements in human capital and reductions in risk. Copyright (c) 2008 John Wiley & Sons, Ltd.

Relevance:

30.00%

Publisher:

Abstract:

Accompanying the call for increased evidence-based policy, the developed world is implementing more longitudinal panel studies, which periodically gather information about the same people over a number of years. Panel studies distinguish between transitory and persistent states (e.g. poverty, unemployment) and facilitate causal explanations of relationships between variables. However, they are complex and costly. A growing number of developing countries are now implementing or considering starting panel studies. The objectives of this paper are to identify challenges that arise in panel studies, and to give examples of how these have been addressed in resource-constrained environments. The main issues considered are: the development of a conceptual framework which links macro and micro contexts; sampling the cohort in a cost-effective way; tracking individuals; ethics; and data management and analysis. Panel studies require long-term funding, a stable institution, and an acceptance that there will be limited value for money in terms of results from early stages, with greater benefits accumulating in the study's mature years. Copyright © 2003 John Wiley & Sons, Ltd.

Relevance:

30.00%

Publisher:

Abstract:

The article considers young people's occupational choices at the age of 15 in relation to their educational attainment, the occupations of their parents and their actual occupations when they are in their early 20s. It uses data from the British Household Panel Survey over periods of between five and ten years. The young people in the survey are occupationally ambitious: many more aspire to professional, managerial and technical jobs than the likely availability of these occupations would suggest. In general, ambitions and educational attainment and intentions are well aligned, but there are also many instances of misalignment: either people wanting jobs which their educational attainments and intentions will not prepare them for, or people with less ambitious aspirations than their educational performance would justify. Children from more occupationally advantaged families are more ambitious, achieve better educationally and have better occupational outcomes than other children. However, where young people are both ambitious and educationally successful, the occupational outcomes are as good for those from disadvantaged families as for those from advantaged families. In contrast, where young people are neither ambitious nor educationally successful, the outcomes for those from disadvantaged homes are very much poorer than for other young people. The article suggests that while choice is real, it is also heavily constrained for many people. A possible educational implication of the study is that career interventions could be directed at under-ambitious but academically capable young people from disadvantaged backgrounds.

Relevance:

30.00%

Publisher:

Abstract:

This paper investigates whether using natural logarithms (logs) of price indices for forecasting inflation rates is preferable to employing the original series. Univariate forecasts for annual inflation rates for a number of European countries and the USA based on monthly seasonal consumer price indices are considered. Stochastic seasonality and deterministic seasonality models are used. In many cases, the forecasts based on the original variables result in substantially smaller root mean squared errors than models based on logs. Where forecasts based on logs are superior, the gains are typically small. This outcome casts doubt on the common practice in the academic literature of forecasting inflation rates based on differences of logs.
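The distinction between the two transformations can be seen in miniature: for an index a year apart, the exact annual inflation rate differs from the log-difference that the common practice uses. A tiny numpy illustration with hypothetical index values, not the paper's data:

```python
import numpy as np

# hypothetical consumer price index values twelve months apart
p_prev, p_now = 100.0, 104.0

exact_rate = p_now / p_prev - 1.0            # exact annual inflation: 0.04
log_diff = np.log(p_now) - np.log(p_prev)    # log approximation, about 0.0392
```

The gap between the two grows with the inflation rate, which is one reason forecasts built on logged indices and on the original series need not rank the same way.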

Relevance:

30.00%

Publisher:

Abstract:

Climate modeling is a complex process, requiring accurate and complete metadata in order to identify, assess and use climate data stored in digital repositories. The preservation of such data is increasingly important given the development of ever more complex models to predict the effects of global climate change. The EU METAFOR project has developed a Common Information Model (CIM) to describe climate data and the models and modelling environments that produce this data. There is a wide degree of variability between different climate models and modelling groups. To accommodate this, the CIM has been designed to be highly generic and flexible, with extensibility built in. METAFOR describes the climate modelling process simply as "an activity undertaken using software on computers to produce data." This process has been described as separate UML packages (and, ultimately, XML schemas). This fairly generic structure can be paired with more specific "controlled vocabularies" in order to restrict the range of valid CIM instances. The CIM will aid digital preservation of climate models as it will provide an accepted standard structure for the model metadata. Tools to write and manage CIM instances, and to allow convenient and powerful searches of CIM databases, are also under development. Community buy-in of the CIM has been achieved through a continual process of consultation with the climate modelling community, and through the METAFOR team's development of a questionnaire that will be used to collect the metadata for the Intergovernmental Panel on Climate Change's (IPCC) Coupled Model Intercomparison Project Phase 5 (CMIP5) model runs.

Relevance:

30.00%

Publisher:

Abstract:

We propose, first, a simple task for eliciting attitudes toward risky choice, the SGG lottery-panel task, which consists of a series of lotteries constructed to compensate riskier options with higher risk-return trade-offs. Using the Principal Component Analysis technique, we show that the SGG lottery-panel task is capable of capturing two dimensions of individual risky decision making, i.e. subjects' average risk taking and their sensitivity towards variations in risk-return. From the results of a large experimental dataset, we confirm that the task systematically captures a number of regularities, such as: a tendency to risk-averse behavior (only around 10% of choices are compatible with risk neutrality); an attraction to certain payoffs compared to low-risk lotteries, compatible with the over-(under-)weighting of small (large) probabilities predicted in PT; and gender differences, i.e. males being consistently less risk averse than females but both genders being similarly responsive to increases in risk-premium. Another interesting result is that in hypothetical choices most individuals increase their risk taking in response to the increase in return to risk, as predicted by PT, while across panels with real rewards we see even more changes, but opposite to the expected pattern of riskier choices for higher risk-returns. Therefore, we conclude from our data that an "economic anomaly" emerges in the real-reward choices, opposite to the hypothetical choices. These findings are in line with Camerer's (1995) view that although in many domains paid subjects probably do exert extra mental effort which improves their performance, choice over money gambles is not likely to be a domain in which effort will improve adherence to rational axioms (p. 635).
Finally, we demonstrate that both dimensions of risk attitudes, average risk taking and sensitivity towards variations in the return to risk, are desirable not only to describe behavior under risk but also to explain behavior in other contexts, as illustrated by an example. In the second study, we propose three additional treatments intended to elicit risk attitudes under high stakes and mixed outcome (gains and losses) lotteries. Using a dataset obtained from a hypothetical implementation of the tasks, we show that the new treatments are able to capture both dimensions of risk attitudes. This new dataset allows us to describe several regularities, both at the aggregate and within-subjects level. We find that in every treatment over 70% of choices show some degree of risk aversion and only between 0.6% and 15.3% of individuals are consistently risk neutral within the same treatment. We also confirm the existence of gender differences in the degree of risk taking, that is, in all treatments females prefer safer lotteries compared to males. Regarding our second dimension of risk attitudes, we observe, in all treatments, an increase in risk taking in response to risk-premium increases. Treatment comparisons reveal other regularities, such as a lower degree of risk taking in large-stake treatments compared to low-stake treatments, and a lower degree of risk taking when losses are incorporated into the large-stake lotteries. These results are compatible with previous findings in the literature on stake-size effects (e.g., Binswanger, 1980; Antoni Bosch-Domènech & Silvestre, 1999; Hogarth & Einhorn, 1990; Holt & Laury, 2002; Kachelmeier & Shehata, 1992; Kühberger et al., 1999; B. J. Weber & Chapman, 2005; Wik et al., 2007) and the domain effect (e.g., Brooks and Zank, 2005; Schoemaker, 1990; Wik et al., 2007). For small-stake treatments, however, we find that the effect of incorporating losses into the outcomes is not so clear.
At the aggregate level an increase in risk taking is observed, but also more dispersion in the choices, whilst at the within-subjects level the effect weakens. Finally, regarding responses to risk premium, we find that, compared to gains-only treatments, sensitivity is lower in the mixed-lottery treatments (SL and LL). In general, sensitivity to risk-return is more affected by the domain than by the stake size. After having described the properties of risk attitudes as captured by the SGG risk elicitation task and its three new versions, it is important to recall that the danger of using unidimensional descriptions of risk attitudes goes beyond the incompatibility with modern economic theories like PT, CPT etc., all of which call for tests with multiple degrees of freedom. Being faithful to this recommendation, the contribution of this essay is an empirically and endogenously determined bi-dimensional specification of risk attitudes, useful to describe behavior under uncertainty and to explain behavior in other contexts. Hopefully, this will contribute to the creation of large datasets containing a multidimensional description of individual risk attitudes, while at the same time allowing for a robust context, compatible with present and even future more complex descriptions of human attitudes towards risk.
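The extraction of two dimensions from panel choices described above can be sketched as a plain PCA via SVD; `risk_attitude_components` and the simulated choice matrix are illustrative assumptions, not the SGG task's actual data or estimator:

```python
import numpy as np

def risk_attitude_components(choices):
    """PCA via SVD on a subjects-by-panels matrix of risky choices.
    Returns the first two component scores (loosely analogous to
    'average risk taking' and 'sensitivity to risk-return') and the
    share of variance each component explains."""
    X = choices - choices.mean(axis=0)      # center each lottery panel
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt.T                       # principal-component scores
    explained = s ** 2 / (s ** 2).sum()     # variance shares, descending
    return scores[:, :2], explained
```

In such a decomposition the first component typically loads similarly on all panels (an overall risk-taking level), while the second captures how choices shift as the risk-return trade-off changes across panels.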

Relevance:

30.00%

Publisher:

Abstract:

A number of methods of evaluating the validity of interval forecasts of financial data are analysed, and illustrated using intraday FTSE100 index futures returns. Some existing interval forecast evaluation techniques, such as the Markov chain approach of Christoffersen (1998), are shown to be inappropriate in the presence of periodic heteroscedasticity. Instead, we consider a regression-based test, and a modified version of Christoffersen's Markov chain test for independence, and analyse their properties when the financial time series exhibit periodic volatility. These approaches lead to different conclusions when interval forecasts of FTSE100 index futures returns generated by various GARCH(1,1) and periodic GARCH(1,1) models are evaluated.
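A minimal sketch of the unconditional-coverage likelihood-ratio test in the spirit of Christoffersen (1998), on which the independence and conditional-coverage variants build; the function name and the hit sequences are illustrative:

```python
import numpy as np
from math import log

def lr_unconditional_coverage(hits, p):
    """LR test of correct unconditional coverage for interval forecasts.
    hits[t] = 1 when the outcome falls outside the forecast interval;
    p is the nominal probability of falling outside (e.g. 0.10)."""
    hits = np.asarray(hits)
    n1 = int(hits.sum())          # violations
    n0 = len(hits) - n1           # non-violations
    pi = n1 / len(hits)           # empirical violation rate
    if pi in (0.0, 1.0):
        return float("nan")       # degenerate sample
    ll_null = n0 * log(1 - p) + n1 * log(p)
    ll_alt = n0 * log(1 - pi) + n1 * log(pi)
    return -2.0 * (ll_null - ll_alt)   # asymptotically chi-squared(1)
```

The abstract's point is that under periodic heteroscedasticity the independence part of such tests misbehaves, which motivates the regression-based and modified Markov chain alternatives it considers.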

Relevance:

30.00%

Publisher:

Abstract:

We consider the impact of data revisions on the forecast performance of a SETAR regime-switching model of U.S. output growth. The impact of data uncertainty in real-time forecasting will affect a model's forecast performance via the effect on the model parameter estimates as well as via the forecast being conditioned on data measured with error. We find that benchmark revisions do affect the performance of the non-linear model of the growth rate, and that the performance relative to a linear comparator deteriorates in real-time compared to a pseudo out-of-sample forecasting exercise.
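A one-step forecast from a two-regime SETAR(1) model can be sketched as follows; the coefficients, threshold, and function name are hypothetical, not the paper's estimated model of U.S. output growth:

```python
import numpy as np

def setar_forecast(y, phi1, phi2, c1, c2, threshold, delay=1):
    """One-step forecast from a two-regime SETAR(1) model: the regime is
    selected by comparing the delayed observation with the threshold,
    then an AR(1) forecast is formed with that regime's coefficients."""
    if y[-delay] <= threshold:
        return c1 + phi1 * y[-1]    # lower regime
    return c2 + phi2 * y[-1]        # upper regime
```

Because the regime depends on a recent observation, a benchmark data revision that moves `y[-delay]` across the threshold changes not just the conditioning value but the entire forecast rule, which is one channel through which real-time data uncertainty hurts non-linear models.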

Relevance:

30.00%

Publisher:

Abstract:

We examine how the accuracy of real-time forecasts from models that include autoregressive terms can be improved by estimating the models on ‘lightly revised’ data instead of using data from the latest-available vintage. The benefits of estimating autoregressive models on lightly revised data are related to the nature of the data revision process and the underlying process for the true values. Empirically, we find improvements in root mean square forecasting error of 2–4% when forecasting output growth and inflation with univariate models, and of 8% with multivariate models. We show that multiple-vintage models, which explicitly model data revisions, require large estimation samples to deliver competitive forecasts. Copyright © 2012 John Wiley & Sons, Ltd.

Relevance:

30.00%

Publisher:

Abstract:

Dystrophin, the protein product of the Duchenne muscular dystrophy (DMD) gene, was studied in 19 patients with Xp21 disorders and in 25 individuals with non-Xp21 muscular dystrophy. Antibodies raised to seven different regions spanning most of the protein were used for immunocytochemistry. In all patients, specific dystrophin staining anomalies were detected and correlated with clinical severity and also with gene deletion. In patients with Becker muscular dystrophy (BMD), the anomalies detected ranged from inter- and intra-fibre variation in labelling intensity with the same antibody or several antibodies to a general reduction in staining and discontinuous staining. In vitro evidence of abnormal dystrophin breakdown was observed on reanalysing the muscle of patients with BMD, but not that of non-Xp21 dystrophies, after it had been stored for several months. A number of patients with DMD showed some staining, but this did not represent a diagnostic problem. Based on the data presented, it was concluded that immunocytochemistry is a powerful technique in the prognostic diagnosis of Xp21 muscular dystrophies.

Relevance:

30.00%

Publisher:

Abstract:

Panel cointegration techniques applied to pooled data for 27 economies for the period 1960-2000 indicate that: i) government spending in education and innovation indicators are cointegrated; ii) education hierarchy is relevant when explaining innovation; and iii) the relation between education and innovation can be obtained after an accommodation of a level structural break.
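The residual-based first stage behind such cointegration tests can be sketched for a single panel unit; `engle_granger` and `df_stat` are illustrative helpers in the Engle-Granger tradition, not the pooled panel estimator used in the paper:

```python
import numpy as np

def df_stat(e):
    """Dickey-Fuller t-statistic on residuals: regress the first difference
    on the lagged level (no constant) and return the t-ratio on the slope."""
    de, lag = np.diff(e), e[:-1]
    rho = (lag @ de) / (lag @ lag)
    u = de - rho * lag
    se = np.sqrt((u @ u) / (len(u) - 1) / (lag @ lag))
    return rho / se

def engle_granger(y, x):
    """Residual-based cointegration check for one unit: OLS of y on x,
    then a unit-root test on the residuals (compare against Engle-Granger
    critical values, not standard Dickey-Fuller ones)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return df_stat(y - X @ beta)
```

Panel variants pool statistics like this across units, and the level structural break mentioned in the abstract would enter through additional dummy regressors in the first-stage regression.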

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)