871 results for panel data analysis


Relevance: 100.00%

Abstract:

This paper presents groundwater favorability mapping for a fractured terrain in the eastern portion of São Paulo State, Brazil. Remote sensing, airborne geophysical data, photogeologic interpretation, geologic and geomorphologic maps, and geographic information system (GIS) techniques were used. Cross-tabulation between these maps and well yield data yielded groundwater prospective parameters for a fractured-bedrock aquifer. These prospective parameters form the basis of the favorability analysis, whose principle rests on the knowledge-driven method. A multicriteria analysis (weighted linear combination) was carried out to produce a groundwater favorability map, because the prospective parameters have different weights of importance and different classes within each parameter. The groundwater favorability map was tested by cross-tabulation with new well yield data and spring occurrences. The wells with the highest productivity, as well as all the springs, are situated in the areas mapped as excellent and good favorability. This shows good coherence between the prospective parameters and the well yields, and the importance of GIS techniques for defining target areas for detailed study and well siting. (c) 2008 Elsevier B.V. All rights reserved.
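The weighted linear combination the abstract describes can be sketched in a few lines: each criterion layer is scored on a common scale and the layers are summed with normalized weights. The layer names, class scores, and weights below are illustrative stand-ins, not the values used in the study.

```python
# Hypothetical weighted-linear-combination (WLC) sketch for favorability
# mapping; layers, scores, and weights are illustrative assumptions.
import numpy as np

def wlc_favorability(layers, weights):
    """Combine normalized criterion layers (scores in [0, 1]) into a
    single favorability surface via a weighted linear combination."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()      # weights must sum to 1
    stacked = np.stack(layers, axis=0)     # (n_criteria, rows, cols)
    return np.tensordot(weights, stacked, axes=1)

# Three toy 2x2 criterion layers: lineament density, lithology, slope.
lineaments = np.array([[1.0, 0.5], [0.0, 0.5]])
lithology  = np.array([[0.8, 0.8], [0.2, 0.2]])
slope      = np.array([[0.6, 0.4], [0.6, 0.4]])

fav = wlc_favorability([lineaments, lithology, slope], weights=[3, 2, 1])
```

Cells scoring high on the heavily weighted criteria end up in the "excellent" end of the favorability surface, which is the map that would then be cross-tabulated against well yields.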


Enriquillo and Azuei are saltwater lakes located in a closed water basin in the southwestern region of the island of Hispaniola; they have experienced dramatic changes in total lake-surface area during the period 1980-2012. Lake Enriquillo had a surface area of approximately 276 km² in 1984, gradually decreasing to 172 km² in 1996. The lake's surface area reached its lowest point in the satellite observation record in 2004, at 165 km²; the lake then began growing again, reaching its 1984 size by 2006. Based on surface area measurements for June and July 2013, Lake Enriquillo has a surface area of ~358 km². Lake Azuei's sizes at both ends of the record are 116 km² in 1984 and 134 km² in 2013, an overall 15.8% increase in 30 years. Determining the causes of lake surface area changes is of extreme importance due to their environmental, social, and economic impacts. The overall goal of this study is to quantify the changing water balance in these lakes and their catchment area using satellite and ground observations and a regional atmospheric-hydrologic modeling approach. Data analyses of environmental variables in the region reflect a hydrological imbalance of the lakes due to changing regional hydro-climatic conditions. Historical data show precipitation, land surface temperature and humidity, and sea surface temperature (SST) increasing over the region during the past decades. Salinity levels have also decreased by more than 30% from previously reported baseline levels. Here we present a summary of the historical data obtained, the new sensors deployed in the surrounding sierras and on the lakes, and the integrated modeling exercises, as well as the challenges of gathering, storing, sharing, and analyzing this large volume of data from such a diverse number of sources in a remote location.
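The water balance being quantified reduces, in lumped form, to precipitation minus evaporation over the lake plus runoff delivered from the catchment. A minimal sketch, with illustrative numbers rather than measured values from the study:

```python
# Minimal lumped water-balance sketch for a closed (endorheic) lake
# basin; variable names and values are illustrative assumptions.
def lake_storage_change(precip_mm, evap_mm, runoff_in_mm, lake_area_km2):
    """Storage change (million m^3) over one period:
    dS = (P - E) over the lake + catchment runoff expressed as an
    equivalent depth over the lake surface."""
    mm_to_m = 1e-3
    km2_to_m2 = 1e6
    ds_m3 = (precip_mm - evap_mm + runoff_in_mm) * mm_to_m * lake_area_km2 * km2_to_m2
    return ds_m3 / 1e6  # million m^3

# Wet period: rainfall plus inflow exceed open-water evaporation,
# so storage (and hence surface area) grows.
ds = lake_storage_change(precip_mm=1400, evap_mm=1900,
                         runoff_in_mm=900, lake_area_km2=300)
```

In a closed basin there is no outflow term, so a persistent positive dS can only be relieved by a larger lake surface (more evaporating area), which is why surface-area records track the hydro-climatic forcing so directly.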


Using the Pricing Equation in a panel-data framework, we construct a novel consistent estimator of the stochastic discount factor (SDF) mimicking portfolio which relies on the fact that its logarithm is the "common feature" in every asset return of the economy. Our estimator is a simple function of asset returns and does not depend on any parametric function representing preferences, making it suitable for testing different preference specifications or investigating intertemporal substitution puzzles.


In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the (feasible) bias-corrected average forecast. Using panel-data sequential asymptotics we show that it is potentially superior to other techniques in several contexts. In particular, it is asymptotically equivalent to the conditional expectation, i.e., it has an optimal limiting mean-squared error. We also develop a zero-mean test for the average bias and discuss the forecast-combination puzzle in small and large samples. Monte Carlo simulations are conducted to evaluate the performance of the feasible bias-corrected average forecast in finite samples. An empirical exercise based upon data from a well-known survey is also presented. Overall, theoretical and empirical results show promise for the feasible bias-corrected average forecast.
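The core idea of the bias-corrected average forecast can be illustrated with a small simulation: average the individual forecasts, then subtract the average bias estimated on a training window. The data-generating process below is an illustrative assumption, not the paper's estimator or Monte Carlo design.

```python
# Sketch of a (feasible) bias-corrected average forecast on simulated
# data: each forecaster carries an individual bias, and averaging alone
# does not remove the common component of that bias.
import numpy as np

rng = np.random.default_rng(0)
T, N = 200, 50                          # time periods, forecasters
y = rng.normal(size=T)                  # target series (white noise here)
bias = rng.normal(0.5, 0.2, size=N)     # individual forecaster biases
f = y[:, None] + bias[None, :] + rng.normal(scale=0.3, size=(T, N))

train, test = slice(0, 150), slice(150, 200)
avg_bias = (f[train] - y[train, None]).mean()   # feasible bias estimate
bcaf = f[test].mean(axis=1) - avg_bias          # bias-corrected average

mse_plain = np.mean((f[test].mean(axis=1) - y[test]) ** 2)
mse_bcaf = np.mean((bcaf - y[test]) ** 2)
```

With many forecasters the idiosyncratic noise averages out, so subtracting the estimated average bias is what closes the remaining gap to the conditional expectation.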


In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the bias-corrected average forecast. Using panel-data sequential asymptotics we show that it is potentially superior to other techniques in several contexts. In particular, it delivers a zero-limiting mean-squared error if the number of forecasts and the number of post-sample time periods are sufficiently large. We also develop a zero-mean test for the average bias. Monte Carlo simulations are conducted to evaluate the performance of this new technique in finite samples. An empirical exercise, based upon data from well-known surveys, is also presented. Overall, these results show promise for the bias-corrected average forecast.



Using the Pricing Equation in a panel-data framework, we construct a novel consistent estimator of the stochastic discount factor (SDF) which relies on the fact that its logarithm is the "common feature" in every asset return of the economy. Our estimator is a simple function of asset returns and does not depend on any parametric function representing preferences. The techniques discussed in this paper were applied to two relevant issues in macroeconomics and finance: the first asks what type of parametric preference representation could be validated by asset-return data, and the second asks whether or not our SDF estimator can price returns in an out-of-sample forecasting exercise. In formal testing, we cannot reject standard preference specifications used in the macro/finance literature. Estimates of the relative risk-aversion coefficient are between 1 and 2, and statistically equal to unity. We also show that our SDF proxy can price the returns of stocks with a higher capitalization level reasonably well, whereas it has some difficulty pricing stocks with a lower level of capitalization.
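The object both SDF abstracts revolve around is the Pricing Equation, E[M·R] = 1 for every asset return R. A generic sketch of that restriction on simulated data (constructed so the equation holds by design) is below; this illustrates what is being estimated and tested, not the paper's common-feature estimator itself.

```python
# Generic check of the Pricing Equation E[M * R] = 1 on simulated data.
# The SDF process and return construction are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
T, N = 5000, 5
log_m = rng.normal(-0.02, 0.05, size=T)   # log SDF (the "common feature")
m = np.exp(log_m)

# Build returns satisfying E[M * R] = 1 by construction:
# R = u / M with u i.i.d., E[u] = 1, u independent of M.
u = rng.lognormal(mean=-0.5 * 0.1**2, sigma=0.1, size=(T, N))
R = u / m[:, None]

# Sample pricing errors should be close to zero for every asset.
pricing_errors = (m[:, None] * R).mean(axis=0) - 1.0
```

In practice the direction is reversed: M is unknown, and an estimator is judged by how small these pricing errors are on observed returns, in and out of sample.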


This paper estimates the elasticity of substitution of an aggregate production function. The estimating equation is derived from the steady state of a neoclassical growth model. The data come from the Penn World Table (PWT), in which different countries face different relative prices of the investment good and exhibit different investment-output ratios. Using this variation, we estimate the elasticity of substitution. The novelty of our approach is that we use dynamic panel data techniques, which allow us to distinguish between the short- and the long-run elasticity and handle a host of econometric and substantive issues. In particular, we accommodate the possibility that different countries have different total factor productivities and other country-specific effects and that such effects are correlated with the regressors. We also accommodate the possibility that the regressors are correlated with the error terms and that shocks to regressors are manifested in future periods. Taking all this into account, our estimation results suggest that the long-run elasticity of substitution is 0.7, which is lower than the elasticity that had been used in previous macro-development exercises. We show that this lower elasticity reinforces the power of the neoclassical model to explain income differences across countries as coming from differential distortions.
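The identifying variation can be illustrated with a stripped-down within (fixed-effects) estimator: demean log investment-output ratios and log relative investment prices by country, then take the pooled OLS slope. The data are simulated, and reading the slope as (minus) the elasticity is a stylized assumption standing in for the paper's actual steady-state mapping and dynamic panel techniques.

```python
# Illustrative within-estimator sketch on simulated panel data with
# country-specific effects correlated with nothing here for simplicity.
import numpy as np

rng = np.random.default_rng(1)
n_countries, n_years = 40, 20
sigma = 0.7                                # "true" elasticity in this simulation
country_fe = rng.normal(size=n_countries)  # country-specific effects
log_p = rng.normal(size=(n_countries, n_years))        # log relative price
log_iy = (country_fe[:, None] - sigma * log_p
          + rng.normal(scale=0.05, size=(n_countries, n_years)))

# Within transformation: demean by country, then pooled OLS slope.
x = log_p - log_p.mean(axis=1, keepdims=True)
z = log_iy - log_iy.mean(axis=1, keepdims=True)
slope = (x * z).sum() / (x * x).sum()
sigma_hat = -slope
```

Demeaning sweeps out anything country-specific and time-invariant, which is exactly why fixed effects correlated with the regressors do not contaminate the slope.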


We investigate whether there was a stable money demand function for Japan in the 1990s, using both aggregate and disaggregate time series data. The aggregate data appear to support the contention that there was no stable money demand function. The disaggregate data show that there was a stable money demand function. Neither was there any indication of the presence of a liquidity trap. Possible sources of discrepancy are explored, and the diametrically opposite results between the aggregate and disaggregate analyses are attributed to the neglected heterogeneity among micro units. We also conduct a simulation analysis to show that when heterogeneity among micro units is present, the prediction of aggregate outcomes using aggregate data is less accurate than the prediction based on micro equations. Moreover, policy evaluation based on aggregate data can be grossly misleading.
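The aggregation point lends itself to a toy simulation: give micro units heterogeneous slopes, then compare predicting the aggregate from one equation fit to aggregate data against summing unit-by-unit micro predictions. All numbers are illustrative assumptions, not the paper's simulation design.

```python
# Toy aggregation-bias simulation: heterogeneous micro slopes make the
# single aggregate equation a poor forecaster of the aggregate outcome.
import numpy as np

rng = np.random.default_rng(2)
T, N = 120, 30
beta = rng.uniform(0.2, 1.8, size=N)       # heterogeneous micro slopes
x = rng.normal(size=(T, N)) + np.linspace(0, 1, T)[:, None]  # regressors
y = x * beta[None, :] + rng.normal(scale=0.1, size=(T, N))

train, test = slice(0, 100), slice(100, 120)

# Micro route: one no-intercept OLS slope per unit, sum the predictions.
b_micro = (x[train] * y[train]).sum(axis=0) / (x[train] ** 2).sum(axis=0)
pred_micro = (x[test] * b_micro[None, :]).sum(axis=1)

# Aggregate route: one slope fit to the summed data.
X, Y = x.sum(axis=1), y.sum(axis=1)
b_agg = (X[train] * Y[train]).sum() / (X[train] ** 2).sum()
pred_agg = X[test] * b_agg

mse_micro = np.mean((pred_micro - Y[test]) ** 2)
mse_agg = np.mean((pred_agg - Y[test]) ** 2)
```

The aggregate slope is a single compromise across units, so whenever the unit regressors move differently the aggregate equation misallocates the response, which is the mechanism behind the misleading policy evaluations the abstract warns about.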


This paper investigates the role of the consumption-wealth ratio in predicting future stock returns through a panel approach. We follow the theoretical framework proposed by Lettau and Ludvigson (2001), in which a model derived from a nonlinear consumer's budget constraint is used to establish the link between the consumption-wealth ratio and stock returns. Using the G7's quarterly aggregate and financial data ranging from the first quarter of 1981 to the first quarter of 2014, we set up an unbalanced panel that we use both for estimating the parameters of the cointegrating residual from the shared trend among consumption, asset wealth, and labor income, cay, and for performing in-sample and out-of-sample forecasting regressions. Due to the panel structure, we propose methodologies for estimating cay and making forecasts that differ from those applied by Lettau and Ludvigson (2001). The results indicate that cay is in fact a strong and robust predictor of future stock returns at intermediate and long horizons, but performs poorly in predicting one- or two-quarter-ahead stock returns.
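The horizon pattern the abstract reports — weak short-horizon, stronger long-horizon predictability — is characteristic of a persistent predictor, and can be reproduced with a small simulation. The AR(1) process standing in for cay and the return equation below are illustrative assumptions only.

```python
# Long-horizon predictive regression sketch: regress cumulative
# h-period-ahead returns on a lagged persistent predictor.
import numpy as np

rng = np.random.default_rng(3)
T = 2000
cay = np.zeros(T)
for t in range(1, T):                      # persistent AR(1) predictor
    cay[t] = 0.9 * cay[t - 1] + rng.normal(scale=0.1)
ret = 0.5 * cay + rng.normal(scale=0.5, size=T)   # one-period "returns"

def horizon_r2(h):
    """In-sample R^2 of cumulative h-period-ahead returns on the
    lagged predictor."""
    ycum = np.array([ret[t + 1:t + 1 + h].sum() for t in range(T - h)])
    xlag = cay[:T - h]
    xd, yd = xlag - xlag.mean(), ycum - ycum.mean()
    b = (xd * yd).sum() / (xd * xd).sum()
    return 1.0 - np.var(yd - b * xd) / np.var(yd)

r2_short, r2_long = horizon_r2(1), horizon_r2(12)
```

Because the predictor decays slowly, its small per-period signal compounds over the forecast horizon while the return noise grows only with the square root of the horizon, so R² rises with h.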


Using the theoretical framework of Lettau and Ludvigson (2001), we perform an empirical investigation of how widespread the predictability of cay (a modified consumption-wealth ratio) is once we consider a set of important countries from a global perspective. We chose to work with the set of G7 countries, which represent more than 64% of net global wealth and 46% of global GDP at market exchange rates. We evaluate the forecasting performance of cay using a panel-data approach, since applying cointegration and other time-series techniques is now standard practice in the panel-data literature. Hence, we generalize Lettau and Ludvigson's tests to a panel of important countries. We employ macroeconomic and financial quarterly data for the group of G7 countries, forming an unbalanced panel. For most countries, data are available from the early 1990s until 2014Q1, but for the U.S. economy they are available from 1981Q1 through 2014Q1. The results of an exhaustive empirical investigation are overwhelmingly in favor of the predictive power of cay in forecasting future stock returns and excess returns.


In this paper, a set of representative samples of Brazilian commercial gasoline from São Paulo State, selected by hierarchical cluster analysis (HCA), plus six samples obtained directly from refineries, were analysed by a high-sensitivity gas chromatographic (GC) method (ASTM D6733). The levels of saturated hydrocarbons and anhydrous ethanol obtained by GC were correlated with the quality obtained from the specifications of the Brazilian National Agency of Petroleum, Natural Gas and Biofuels (ANP) through exploratory analysis (HCA and principal component analysis, PCA). This correlation showed that the GC method, together with HCA and PCA, could be employed as a screening technique to determine compliance of Brazilian gasoline with the prescribed legal standards.


Linear mixed effects models are frequently used to analyse longitudinal data, due to their flexibility in modelling the covariance structure between and within observations. Further, it is easy to deal with unbalanced data, either with respect to the number of observations per subject or per time period, and with varying time intervals between observations. In most applications of mixed models to biological sciences, a normal distribution is assumed both for the random effects and for the residuals. This, however, makes inferences vulnerable to the presence of outliers. Here, linear mixed models employing thick-tailed distributions for robust inferences in longitudinal data analysis are described. Specific distributions discussed include the Student-t, the slash and the contaminated normal. A Bayesian framework is adopted, and the Gibbs sampler and the Metropolis-Hastings algorithms are used to carry out the posterior analyses. An example with data on orthodontic distance growth in children is discussed to illustrate the methodology. Analyses based on either the Student-t distribution or on the usual Gaussian assumption are contrasted. The thick-tailed distributions provide an appealing robust alternative to the Gaussian process for modelling distributions of the random effects and of residuals in linear mixed models, and the MCMC implementation allows the computations to be performed in a flexible manner.
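The robustness mechanism of the thick-tailed distributions can be shown in miniature: under a Student-t model, an EM-style update weights each observation inversely to its squared standardized residual, so outliers are downweighted rather than dragging the estimate as they do under the Gaussian assumption. The location-only model, fixed degrees of freedom, and data below are illustrative assumptions, far simpler than the Bayesian mixed models in the abstract.

```python
# Minimal sketch of t-based robustness: iteratively reweighted
# averaging (EM with known df and scale) for a location parameter.
import numpy as np

def t_location(x, df=4.0, scale=1.0, n_iter=50):
    """Location estimate under Student-t errors."""
    mu = np.median(x)
    for _ in range(n_iter):
        r2 = ((x - mu) / scale) ** 2
        w = (df + 1.0) / (df + r2)     # outliers receive small weights
        mu = np.sum(w * x) / np.sum(w)
    return mu

# Nine observations near 10 plus one gross outlier at 50.
x = np.concatenate([np.full(9, 10.0) + np.linspace(-0.4, 0.4, 9), [50.0]])
mu_t = t_location(x)
mu_gauss = x.mean()    # Gaussian answer: pulled toward the outlier
```

The same downweighting operates on random effects and residuals inside the MCMC schemes the abstract describes, which is why the t, slash, and contaminated-normal models give inferences that are stable in the presence of outliers.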


In this work, initial crystallographic studies of human haemoglobin (Hb) crystallized in isoionic and oxygen-free PEG solution are presented. Under these conditions, functional measurements of the O2-linked binding of water molecules and release of protons have evidenced that Hb assumes an unforeseen new allosteric conformation. The determination of the high-resolution structure of the crystal of human deoxy-Hb fully stripped of anions may provide a structural explanation for the role of anions in the allosteric properties of Hb and, particularly, for the influence of chloride on the Bohr effect, the mechanism by which Hb oxygen affinity is regulated by pH. X-ray diffraction data were collected to 1.87 Angstrom resolution using a synchrotron-radiation source. Crystals belong to the space group P2(1)2(1)2 and preliminary analysis revealed the presence of one tetramer in the asymmetric unit. The structure is currently being refined using maximum-likelihood protocols.