922 results for "Insider econometrics"
Abstract:
As adult height is a well-established retrospective measure of health and standard of living, it is important to understand the factors that determine it. Among them, the influence of socio-environmental factors has been subjected to empirical scrutiny. This paper explores the influence of generational (or environmental) effects and of individual and gender-specific heterogeneity on adult height. Our data set is from contemporary Spain, a country governed by an authoritarian regime between 1939 and 1977. First, we use normal position and quantile regression analysis to identify the determinants of self-reported adult height and to measure the influence of individual heterogeneity. Second, we use a Blinder-Oaxaca decomposition approach to explain the 'gender height gap' and its distribution, so as to measure the influence of individual heterogeneity on this gap. Our findings suggest a significant increase in adult height in the generations that benefited from the country's economic liberalization in the 1950s, and especially those brought up after the transition to democracy in the 1970s. In contrast, distributional effects on height suggest that only in recent generations has 'height increased more among the tallest'. Although the mean gender height gap is 11 cm, generational effects and other controls such as individual capabilities explain on average roughly 5% of this difference, a figure that rises to 10% in the lowest 10% quantile.
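The two-fold Blinder-Oaxaca logic described above can be sketched on synthetic data. The single covariate, sample sizes, and coefficient values below are illustrative assumptions, not the paper's data or specification:

```python
import random

def ols(xs, ys):
    """Simple one-regressor OLS: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def oaxaca(x_a, y_a, x_b, y_b):
    """Two-fold decomposition of the mean gap E[y_a] - E[y_b],
    using group B's coefficients as the reference structure."""
    a_a, b_a = ols(x_a, y_a)
    a_b, b_b = ols(x_b, y_b)
    mx_a, mx_b = sum(x_a) / len(x_a), sum(x_b) / len(x_b)
    explained = b_b * (mx_a - mx_b)                  # characteristics part
    unexplained = (a_a - a_b) + mx_a * (b_a - b_b)   # coefficients part
    return explained, unexplained

# Invented data: group A is taller and has higher covariate values.
random.seed(1)
x_a = [random.gauss(1.0, 0.5) for _ in range(500)]
y_a = [175 + 3 * x + random.gauss(0, 2) for x in x_a]
x_b = [random.gauss(0.5, 0.5) for _ in range(500)]
y_b = [164 + 2 * x + random.gauss(0, 2) for x in x_b]

explained, unexplained = oaxaca(x_a, y_a, x_b, y_b)
gap = sum(y_a) / len(y_a) - sum(y_b) / len(y_b)
```

Because OLS with an intercept fits the group means exactly, the two components sum to the raw mean gap by construction; the paper's distributional version repeats this logic at each quantile.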
Abstract:
We consider a discrete time, pure exchange, infinite horizon economy with two or more consumers and at least one consumption good per period. Within the framework of decentralized mechanisms, we show that for a given consumption trade at any period of time, say at time one, the consumers will need, in general, an infinite-dimensional (informational) space to identify such a trade as an intertemporal Walrasian one. However, we show and characterize a set of environments where the Walrasian trades at each period of time can be achieved as the equilibrium trades of a sequence of decentralized competitive mechanisms, using only current prices and quantities to coordinate decisions.
Abstract:
This paper is about the determinants of migration at a local level. We use data from Catalan municipalities in order to understand what explains migration patterns, trying to identify whether they are mainly explained by amenities or by economic characteristics. We distinguish three typologies of migration in terms of distance travelled: short-distance, short-medium-distance and medium-distance, and we test whether migration determinants vary across these groups. Our results show that there are indeed some noticeable differences, suggest that spatial issues must be taken into account, and provide some insights for future research. Keywords: population dynamics, spatial econometrics. JEL codes: C21, R0, R23
Abstract:
This paper proposes new methodologies for evaluating out-of-sample forecasting performance that are robust to the choice of the estimation window size. The methodologies involve evaluating the predictive ability of forecasting models over a wide range of window sizes. We show that the tests proposed in the literature may lack the power to detect predictive ability and might be subject to data snooping across different window sizes if used repeatedly. An empirical application shows the usefulness of the methodologies for evaluating exchange rate models' forecasting ability.
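The core idea of scanning a grid of estimation window sizes rather than committing to one can be sketched as follows; the rolling-mean "forecasting model" and the particular grid below are stand-in assumptions for illustration, not the paper's tests:

```python
import random

def oos_mse(series, window):
    """Out-of-sample MSE of a rolling-mean forecast estimated on the
    last `window` observations, evaluated on every remaining point."""
    errors = []
    for t in range(window, len(series)):
        forecast = sum(series[t - window:t]) / window
        errors.append((series[t] - forecast) ** 2)
    return sum(errors) / len(errors)

random.seed(0)
series = [random.gauss(0, 1) for _ in range(300)]

# Evaluate predictive ability over a wide range of window sizes, so that
# conclusions are not an artifact of one arbitrary window choice.
window_grid = [10, 20, 40, 80]
mse_by_window = {w: oos_mse(series, w) for w in window_grid}
```

Reporting the whole profile `mse_by_window`, instead of the best entry, is what guards against the data-snooping problem the abstract mentions.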
Abstract:
We explore the linkage between equity and commodity markets, focusing in particular on its evolution over time. We document that a country's equity market value has significant out-of-sample predictive ability for the future global commodity price index for several primary commodity-exporting countries. The out-of-sample predictive ability of the equity market appears around the 2000s. The results are robust to using several control variables as well as firm-level equity data. Finally, our results indicate that exchange rates are a better predictor of commodity prices than equity markets, especially at very short horizons.
Abstract:
Many governments in developing countries implement programs that aim to address nutritional failures in early childhood, yet evidence on the effectiveness of these interventions is scant. This paper evaluates the impact of a conditional food supplementation program on child mortality in Ecuador. The Programa de Alimentación y Nutrición Nacional (PANN) 2000 was implemented by regular staff at local public health posts and consisted of offering a free micronutrient-fortified food, Mi Papilla, for children aged 6 to 24 months in exchange for routine health check-ups for the children. Our regression discontinuity design exploits the fact that at its inception, the PANN 2000 was running for about 8 months only in the poorest communities (parroquias) of certain provinces. Our main result is that the presence of the program reduced child mortality in cohorts with 8 months of differential exposure from a level of about 2.5 percent by 1 to 1.5 percentage points.
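A sharp regression-discontinuity estimate of the kind described reduces to comparing local linear fits on either side of the eligibility cutoff. The running variable, cutoff, bandwidth, and the jump of -1.2 in the synthetic data below are invented for illustration, not taken from the PANN 2000 evaluation:

```python
import random

def ols(xs, ys):
    """One-regressor OLS: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def rdd_effect(score, outcome, cutoff, bandwidth):
    """Sharp RD estimate: difference of the two local linear fits,
    both evaluated at the cutoff (score is re-centered so the
    intercept is the fitted value at the threshold)."""
    left = [(s - cutoff, y) for s, y in zip(score, outcome)
            if cutoff - bandwidth <= s < cutoff]
    right = [(s - cutoff, y) for s, y in zip(score, outcome)
             if cutoff <= s <= cutoff + bandwidth]
    a_left, _ = ols(*zip(*left))
    a_right, _ = ols(*zip(*right))
    return a_right - a_left

# Synthetic poverty-index data with a known discontinuity of -1.2 at 5.
random.seed(42)
score = [random.uniform(0, 10) for _ in range(4000)]
outcome = [0.3 * s + (-1.2 if s >= 5 else 0) + random.gauss(0, 0.5)
           for s in score]
effect = rdd_effect(score, outcome, cutoff=5, bandwidth=2)
```

Because treatment assignment changes discontinuously at the threshold while everything else varies smoothly, the jump in the fitted lines identifies the program effect.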
Abstract:
The Baby and the Couple provides an insider's view on how infant communication develops in the context of the family and how parents either work together as a team or struggle in the process. The authors present vignettes from everyday life as well as case studies from a longitudinal research project of infants and their parents interacting together in the Lausanne Trilogue Play (LTP), an assessment tool for very young families. Divided into three parts, the book focuses not only on the parents, but also on the infant's contribution to the family. Part 1 presents a case study of Lucas and his family, from infancy to age 5. With each chapter we see how, in the context of their families, infants learn to communicate with more than one person at a time. Part 2 explores how infants cope when their parents struggle to work together - excluding, competing or only connecting through their child. The authors follow several case examples from infancy through to early childhood to illustrate various forms of problematic co-parenting, along with the infant's derailed trajectory at different ages and stages. In Part 3, prevention and intervention models based on the LTP are presented. In addition to an overview of these programs, chapters are devoted to the Developmental Systems Consultation, which combines use of the LTP and video feedback, and a new model, Reflective Family Play, which allows whole families to engage in treatment. The Baby and the Couple is a vital resource for professionals working in the fields of infant and preschool mental health including psychiatrists, psychologists, social workers, family therapists and educators, as well as researchers.
Abstract:
This paper develops an approach to rank testing that nests all existing rank tests and simplifies their asymptotics. The approach is based on the fact that implicit in every rank test there are estimators of the null spaces of the matrix in question. The approach yields many new insights about the behavior of rank testing statistics under the null as well as local and global alternatives, in both the standard and the cointegration setting. The approach also suggests many new rank tests based on alternative estimates of the null spaces, as well as the new fixed-b theory. A brief Monte Carlo study illustrates the results.
Abstract:
Background: In January 2011 Spain modified clean air legislation in force since 2006, removing all existing exceptions applicable to hospitality venues. Although this legal reform was backed by all political parties with parliamentary representation, the government's initiative was contested by the tobacco industry and its allies in the hospitality industry. One of the most voiced arguments against the reform was its potentially disruptive effect on the revenue of hospitality venues. This paper evaluates the impact of this reform on household expenditure at restaurants and at bars and cafeterias. Methods and empirical strategy: We use micro-data from the Encuesta de Presupuestos Familiares (EPF) for the years 2006 to 2012 to estimate "two-part" models where the probability of observing a positive expenditure and, for those who spend, the expected level of expenditure are functions of an array of explanatory variables. We apply a before-after analysis with a wide range of controls for confounding factors and a flexible modeling of time effects. Results: In line with the majority of studies that analyze the effects of smoking bans using objective data, our results suggest that the reform did not cause reductions in households' expenditures on restaurant services or on bar and cafeteria services.
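The "two-part" logic can be illustrated with made-up expenditure records: one part models whether a household spends at all, the other how much it spends conditional on spending. The numbers below are hypothetical, and the actual models condition both parts on covariates rather than using raw sample shares:

```python
def two_part_mean(expenditures):
    """First part: probability of any positive expenditure.
    Second part: expected expenditure among those who spend."""
    positives = [e for e in expenditures if e > 0]
    p_positive = len(positives) / len(expenditures)
    mean_if_positive = sum(positives) / len(positives)
    return p_positive, mean_if_positive

# Hypothetical household spending on restaurant services (zeros are
# non-spending households), before and after the 2011 reform.
before = [0, 0, 40, 55, 0, 30, 80, 0, 25, 60]
after = [0, 35, 50, 0, 0, 45, 70, 20, 0, 55]

p_before, m_before = two_part_mean(before)
p_after, m_after = two_part_mean(after)

# The product of the two parts reproduces the unconditional mean,
# zeros included, which is why the decomposition is coherent.
assert abs(p_before * m_before - sum(before) / len(before)) < 1e-9
```

Splitting the question this way lets a reform affect participation (whether anyone goes to a bar) and intensity (how much they spend there) separately.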
Abstract:
Spatial data analysis, mapping and visualization are of great importance in various fields: environment, pollution, natural hazards and risks, epidemiology, spatial econometrics, etc. A basic task of spatial mapping is to make predictions based on some empirical data (measurements). A number of state-of-the-art methods can be used for the task: deterministic interpolations; the methods of geostatistics, i.e. the family of kriging estimators (Deutsch and Journel, 1997); machine learning algorithms such as artificial neural networks (ANN) of different architectures; hybrid ANN-geostatistics models (Kanevski and Maignan, 2004; Kanevski et al., 1996); etc. All the methods mentioned above can be used for solving the problem of spatial data mapping. Environmental empirical data are always contaminated/corrupted by noise, often of unknown nature. That is one of the reasons why deterministic models can be inconsistent, since they treat the measurements as values of some unknown function that should be interpolated. Kriging estimators treat the measurements as the realization of some spatial random process. To obtain an estimation with kriging one has to model the spatial structure of the data: the spatial correlation function or (semi-)variogram. This task can be complicated if there is not a sufficient number of measurements, and the variogram is sensitive to outliers and extremes. ANN is a powerful tool, but it also suffers from a number of drawbacks. ANNs of a special type, multilayer perceptrons, are often used as a detrending tool in hybrid (ANN + geostatistics) models (Kanevski and Maignan, 2004). Therefore, the development and adaptation of a method that is nonlinear and robust to noise in measurements, that can deal with small empirical datasets and that has a solid mathematical background is of great importance. The present paper deals with such a model, based on Statistical Learning Theory (SLT): Support Vector Regression.
SLT is a general mathematical framework devoted to the problem of estimating dependencies from empirical data (Hastie et al., 2004; Vapnik, 1998). SLT models for classification, Support Vector Machines, have shown good results on different machine learning tasks. The results of SVM classification of spatial data are also promising (Kanevski et al., 2002). The properties of SVM for regression, Support Vector Regression (SVR), are less studied. The first results of the application of SVR to spatial mapping of physical quantities were obtained by the authors for mapping of medium porosity (Kanevski et al., 1999) and for mapping of radioactively contaminated territories (Kanevski and Canu, 2000). The present paper is devoted to further understanding of the properties of the SVR model for spatial data analysis and mapping. A detailed description of the SVR theory can be found in (Cristianini and Shawe-Taylor, 2000; Smola, 1996), and the basic equations for the nonlinear modeling are given in Section 2. Section 3 discusses the application of SVR to spatial data mapping in a real case study: soil pollution by the Cs137 radionuclide. Section 4 discusses the properties of the model applied to noisy data or data with outliers.
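To make the ε-insensitive idea behind SVR concrete, here is a bare-bones linear ε-SVR fitted by subgradient descent on one-dimensional synthetic data. The kernelized solvers used for real spatial mapping are far more elaborate, and every parameter value below is an arbitrary choice for illustration:

```python
import random

def fit_linear_svr(xs, ys, eps=0.1, c=2.0, lr=0.05, epochs=2000):
    """Minimize 0.5*w**2 + c * sum(max(0, |y - (w*x + b)| - eps))
    by plain subgradient descent; residuals inside the epsilon tube
    contribute no loss and no gradient, which gives SVR its
    robustness to small measurement noise."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw, gb = w, 0.0  # start from the regularizer's gradient
        for x, y in zip(xs, ys):
            r = y - (w * x + b)
            if r > eps:       # point above the tube: raise the line
                gw -= c * x
                gb -= c
            elif r < -eps:    # point below the tube: lower the line
                gw += c * x
                gb += c
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

random.seed(7)
xs = [random.uniform(-1, 1) for _ in range(200)]
ys = [2.0 * x + 0.5 + random.gauss(0, 0.02) for x in xs]
w, b = fit_linear_svr(xs, ys)  # recovers roughly y = 2x + 0.5
```

Only points outside the tube (the support vectors) shape the fit; everything inside is ignored, in contrast to least squares where every residual contributes.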
Abstract:
The final year project came to us as an opportunity to get involved in a topic which appeared attractive during the process of majoring in economics: statistics and its application to the analysis of economic data, i.e. econometrics. Moreover, the combination of econometrics and computer science is a very hot topic nowadays, given the Information Technologies boom of the last decades and the consequent exponential increase in the amount of data collected and stored day by day. Data analysts able to deal with Big Data and to extract useful results from it are in high demand these days and, in our understanding, the work they do, although sometimes controversial in terms of ethics, is a clear source of added value both for private corporations and for the public sector. For these reasons, the essence of this project is the study of a statistical instrument valid for the analysis of large datasets and directly related to computer science: Partial Correlation Networks. The structure of the project has been determined by our objectives throughout its development. First, the characteristics of the studied instrument are explained, from the basic ideas up to the features of the model behind it, with the final goal of presenting the SPACE model as a tool for estimating interconnections between elements in large data sets. Afterwards, an illustrative simulation is performed in order to show the power and efficiency of the model presented. Finally, the model is put into practice by analyzing a relatively large set of real-world data, with the objective of assessing whether the proposed statistical instrument is valid and useful when applied to a real multivariate time series.
In short, our main goals are to present the model and to evaluate whether Partial Correlation Network Analysis is an effective, useful instrument that allows finding valuable results in Big Data. The findings throughout this project suggest that the Partial Correlation Estimation by Joint Sparse Regression Models approach presented by Peng et al. (2009) works well under the assumption of sparsity of the data. Moreover, partial correlation networks are shown to be a valid tool to represent cross-sectional interconnections between elements in large data sets. The scope of this project is, however, limited, as there are some sections in which deeper analysis would have been appropriate. Considering intertemporal connections between elements, the choice of the tuning parameter lambda, or a deeper analysis of the results in the real data application are examples of aspects in which this project could be extended. To sum up, the analyzed statistical tool has proved to be a very useful instrument to find relationships connecting the elements of a large data set. Ultimately, partial correlation networks allow the owner of such a data set to observe and analyze existing linkages that could otherwise have been overlooked.
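A minimal sketch of the partial-correlation idea behind such networks, computed here by inverting the covariance matrix rather than by the joint sparse regression (SPACE) estimator the project studies; the three-variable chain is a made-up example:

```python
import random

def invert(m):
    """Gauss-Jordan inversion of a small square matrix (list of lists)."""
    n = len(m)
    a = [row[:] + [float(i == j) for j in range(n)]
         for i, row in enumerate(m)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        p = a[col][col]
        a[col] = [v / p for v in a[col]]
        for r in range(n):
            if r != col and a[r][col]:
                f = a[r][col]
                a[r] = [v - f * u for v, u in zip(a[r], a[col])]
    return [row[n:] for row in a]

def covariance(cols):
    n = len(cols[0])
    means = [sum(c) / n for c in cols]
    return [[sum((u - mi) * (v - mj) for u, v in zip(ci, cj)) / n
             for cj, mj in zip(cols, means)]
            for ci, mi in zip(cols, means)]

def partial_correlations(cols):
    """Partial correlation of each pair given all other variables:
    rho_ij = -Omega_ij / sqrt(Omega_ii * Omega_jj), Omega = Sigma^-1.
    Only the off-diagonal entries are meaningful (edges of the network)."""
    omega = invert(covariance(cols))
    k = len(cols)
    return [[-omega[i][j] / (omega[i][i] * omega[j][j]) ** 0.5
             for j in range(k)] for i in range(k)]

# Chain x -> y -> z: x and z are marginally correlated but conditionally
# independent given y, so the x-z edge should vanish in the network.
random.seed(3)
x = [random.gauss(0, 1) for _ in range(5000)]
y = [xi + random.gauss(0, 0.5) for xi in x]
z = [yi + random.gauss(0, 0.5) for yi in y]
pcorr = partial_correlations([x, y, z])
```

Zeroing out near-zero entries of `pcorr` yields the network's edge set; sparse regression estimators such as SPACE do this selection directly, which is what makes them usable when the number of variables is large relative to the sample.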
Abstract:
This paper suggests a method for obtaining efficiency bounds in models containing either only infinite-dimensional parameters or both finite- and infinite-dimensional parameters (semiparametric models). The method is based on a theory of random linear functionals applied to the gradient of the log-likelihood functional, and it is illustrated by computing the lower bound for Cox's regression model.
Abstract:
Panel data can be arranged into a matrix in two ways, called 'long' and 'wide' formats (LF and WF). The two formats suggest two alternative model approaches for analyzing panel data: (i) univariate regression with varying intercept; and (ii) multivariate regression with latent variables (a particular case of structural equation model, SEM). The present paper compares the two approaches, showing in which circumstances they yield equivalent (in some cases, even numerically equal) results. We show that the univariate approach gives results equivalent to the multivariate approach when restrictions of time invariance (in the paper, the TI assumption) are imposed on the parameters of the multivariate model. It is shown that the restrictions implicit in the univariate approach can be assessed by chi-square difference testing of two nested multivariate models. In addition, common tests encountered in the econometric analysis of panel data, such as the Hausman test, are shown to have an equivalent representation as chi-square difference tests. Commonalities and differences between the univariate and multivariate approaches are illustrated using an empirical panel data set of firms' profitability as well as a simulated panel data set.
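The LF/WF arrangement itself (leaving aside the regression and SEM estimation) can be shown with a toy firms-by-years panel; the firm names and profit figures are invented:

```python
# Hypothetical firm-level panel: profitability observed over two years.
# Long format: one row per firm-year combination.
long_format = [
    {"firm": "A", "year": 2001, "profit": 5.0},
    {"firm": "A", "year": 2002, "profit": 6.0},
    {"firm": "B", "year": 2001, "profit": 3.0},
    {"firm": "B", "year": 2002, "profit": 3.5},
]

def to_wide(long_rows):
    """Long to wide: one row per firm, one column (key) per year.
    This is the arrangement the multivariate/SEM approach works with."""
    wide = {}
    for row in long_rows:
        wide.setdefault(row["firm"], {})[row["year"]] = row["profit"]
    return wide

def to_long(wide_rows):
    """Inverse reshape: wide back to long, the arrangement used by
    univariate varying-intercept regression."""
    return [{"firm": f, "year": y, "profit": p}
            for f, years in sorted(wide_rows.items())
            for y, p in sorted(years.items())]

wide_format = to_wide(long_format)
```

The two layouts carry exactly the same information, which is why the paper can ask when the two model approaches built on them give equivalent answers.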