129 results for Panel-data econometrics
Abstract:
This paper examines the extent to which innovative Spanish firms pursue improvements in energy efficiency (EE) as an objective of innovation. The increase in energy consumption and its impact on greenhouse gas emissions justify the greater attention being paid to energy efficiency, especially industrial EE. The ability of manufacturing companies to innovate and improve their EE has a substantial influence on attaining objectives regarding climate change mitigation. Despite the effort to design more efficient energy policies, the EE determinants in manufacturing firms have been little studied in the empirical literature. From an exhaustive sample of Spanish manufacturing firms and using a logit model, we examine the energy efficiency determinants for those firms that have innovated. To carry out the econometric analysis, we use panel data from the Community Innovation Survey for the period 2008‐2011. Our empirical results underline the role of size among the characteristics of firms that facilitate energy efficiency innovation. Regarding company behaviour, firms that consider the reduction of environmental impacts to be an important objective of innovation and that have introduced organisational innovations are more likely to innovate with the objective of increasing energy efficiency. Keywords: energy efficiency, corporate targets, innovation, Community Innovation Survey. JEL Classification: Q40, Q55, O31
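The workhorse estimator this abstract names is a binary logit. As a minimal sketch of that setup, the following fits a logit by plain gradient ascent on simulated firm data; the single regressor (log firm size), all coefficient values, and the data itself are hypothetical illustrations, not the paper's specification.

```python
import numpy as np

def logit_loglik(beta, X, y):
    """Log-likelihood of a binary logit: y_i ~ Bernoulli(sigmoid(X_i @ beta))."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

def fit_logit(X, y, steps=200, lr=0.1):
    """Fit by gradient ascent on the log-likelihood (illustrative, not production)."""
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        beta += lr * X.T @ (y - p) / len(y)  # score of the logit likelihood
    return beta

# Hypothetical firm-level data: column 0 = intercept, column 1 = log(size).
rng = np.random.default_rng(0)
size = rng.normal(0, 1, 500)
X = np.column_stack([np.ones(500), size])
y = (rng.random(500) < 1 / (1 + np.exp(-(0.5 + 1.0 * size)))).astype(float)
beta_hat = fit_logit(X, y)  # positive estimate on size mirrors the size effect
```

A real application would add year dummies and cluster-robust inference; the sketch only shows the likelihood being climbed.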
Abstract:
This paper analyses the impact of Free Trade Agreements (FTAs) on Middle East and North African Countries (MENA) trade for the period 1994-2010. The analysis distinguishes between industrial and agricultural trade to take into account the different liberalisation schedules. An augmented gravity model is estimated using up-to-date panel data techniques to control for all time-invariant bilateral factors that influence bilateral trade as well as for the so-called multilateral resistance factors. We also control for the endogeneity of the agreements and test for self-selection bias due to the presence of zero trade in our sample. The main findings indicate that North-South FTAs and South-South FTAs have a differential impact in terms of increasing trade in MENA countries, with the former being more beneficial for MENA exports, though both foster greater integration into the global market. We also find that FTAs that include agricultural products, in which MENA countries have a clear comparative advantage, have more favourable effects for these countries than those only including industrial products. JEL code: F10, F15
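The gravity equation at the heart of this abstract is usually estimated in logs. Below is a minimal log-linearised gravity regression fitted by OLS on synthetic data; the coefficients and data are invented, and the paper's bilateral fixed effects, multilateral resistance controls, and zero-trade corrections are deliberately omitted.

```python
import numpy as np

# ln(trade_ij) = b0 + b1*ln(gdp_i) + b2*ln(gdp_j) + b3*ln(dist_ij) + e_ij
rng = np.random.default_rng(1)
n = 400
ln_gdp_i = rng.normal(10, 1, n)
ln_gdp_j = rng.normal(10, 1, n)
ln_dist = rng.normal(7, 0.5, n)
# Synthetic "true" coefficients: positive GDP elasticities, negative distance.
ln_trade = 1.0 + 0.8 * ln_gdp_i + 0.7 * ln_gdp_j - 1.1 * ln_dist + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), ln_gdp_i, ln_gdp_j, ln_dist])
beta, *_ = np.linalg.lstsq(X, ln_trade, rcond=None)  # OLS estimates
```

The negative estimated distance elasticity is the classic gravity signature; an FTA effect would enter as an additional dummy regressor.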
Abstract:
This paper analyses the relationship between productivity and the international position of Spanish chemical firms in the period 2005-2011. The goal is to determine whether companies that follow an international strategy, either through exports or through investment in foreign countries, obtain greater productivity growth than those that do not operate in the global market. For this purpose, a panel data set of microdata has been created. A preliminary analysis of the evolution of productivity growth in the sector is carried out, and Total Factor Productivity (TFP) is measured. With the estimated TFP, we analyse the differentials in productivity growth, comparing the effects of export and investment behaviour against firms that are not internationalised.
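TFP measurement can be illustrated with the simplest textbook version, the Solow residual under Cobb-Douglas technology; the capital share and the numbers below are assumptions for illustration only, not the paper's estimation method.

```python
import math

def log_tfp(y, k, l, alpha=0.35):
    """Solow-residual TFP under Y = A * K^alpha * L^(1-alpha):
    ln A = ln Y - alpha*ln K - (1-alpha)*ln L.
    alpha = 0.35 is an assumed capital share, not an estimate."""
    return math.log(y) - alpha * math.log(k) - (1 - alpha) * math.log(l)

# TFP growth between two periods is the difference of the log residuals.
# With inputs unchanged, a 10% output rise is pure TFP growth of ln(1.1).
g = log_tfp(110.0, 100.0, 50.0) - log_tfp(100.0, 100.0, 50.0)
```

Production-function estimation in firm panels typically replaces the fixed alpha with estimated elasticities (e.g. Olley-Pakes or Levinsohn-Petrin style methods), which the sketch does not attempt.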
Abstract:
We explore the determinants of usage of six different types of health care services, using the Medical Expenditure Panel Survey data, years 1996-2000. We apply a number of models for univariate count data, including semiparametric, semi-nonparametric and finite mixture models. We find that the complexity of the model that is required to fit the data well depends upon the way in which the data is pooled across sexes and over time, and upon the characteristics of the usage measure. Pooling across time and sexes is almost always favored, but when more heterogeneous data is pooled it is often the case that a more complex statistical model is required.
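Among the count-data models the abstract lists, finite mixtures are the easiest to sketch. The following evaluates the log-likelihood of a two-component Poisson mixture on invented doctor-visit counts; the component parameters are arbitrary and nothing here reproduces the paper's estimates.

```python
import math

def poisson_pmf(k, lam):
    """P(K = k) for a Poisson(lam) random variable."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def mixture_loglik(counts, lam1, lam2, pi):
    """Log-likelihood of a two-component Poisson finite mixture:
    f(k) = pi * Pois(k; lam1) + (1 - pi) * Pois(k; lam2)."""
    return sum(math.log(pi * poisson_pmf(k, lam1) + (1 - pi) * poisson_pmf(k, lam2))
               for k in counts)

# Hypothetical usage counts: a low-use group and a high-use group.
counts = [0, 0, 1, 0, 2, 7, 9, 8, 0, 1]
ll_mix = mixture_loglik(counts, 0.8, 8.0, 0.6)
# Setting both components equal collapses the mixture to a single Poisson.
ll_single = mixture_loglik(counts, 2.8, 2.8, 0.5)
```

On bimodal data like this, the mixture attains a higher log-likelihood than the single Poisson, which is exactly the "more heterogeneous data needs a more complex model" point the abstract makes.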
Abstract:
We use historical data that cover more than one century on real GDP for industrial countries and employ the Pesaran panel unit root test that allows for cross-sectional dependence to test for a unit root on real GDP. We find strong evidence against the unit root null. Our results are robust to the chosen group of countries and the sample period. Key words: real GDP stationarity, cross-sectional dependence, CIPS test. JEL Classification: C23, E32
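The CIPS statistic averages cross-sectionally augmented Dickey-Fuller (CADF) t-statistics over the panel units. Below is a stylised version on simulated random walks; the regression specification (constant, no trend, no extra lags) and the panel dimensions are simplifying assumptions, and no critical values are computed.

```python
import numpy as np

def cadf_tstat(y, ybar):
    """t-statistic on the lagged level in a cross-sectionally augmented
    Dickey-Fuller regression (illustrative, no trend or extra lags):
    dy_t = a + b*y_{t-1} + c*ybar_{t-1} + d*dybar_t + e_t."""
    dy, dybar = np.diff(y), np.diff(ybar)
    X = np.column_stack([np.ones(len(dy)), y[:-1], ybar[:-1], dybar])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

# CIPS = average of the unit-level CADF statistics across the panel.
rng = np.random.default_rng(2)
T, N = 120, 8
panel = np.cumsum(rng.normal(0, 1, (T, N)), axis=0)  # N independent random walks
ybar = panel.mean(axis=1)  # cross-sectional average captures a common factor
cips = np.mean([cadf_tstat(panel[:, i], ybar) for i in range(N)])
```

Rejecting the unit root null requires comparing `cips` to simulated critical values, which the sketch leaves out.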
Abstract:
This paper aims to evaluate the reliability of the income data from the first (1994) wave of the PHOGUE (Panel de Hogares de la Unión Europea), the Spanish national version of the European Community Household Panel.
Abstract:
It is common in econometric applications that several hypothesis tests are carried out at the same time. The problem then becomes how to decide which hypotheses to reject, accounting for the multitude of tests. In this paper, we suggest a stepwise multiple testing procedure which asymptotically controls the familywise error rate at a desired level. Compared to related single-step methods, our procedure is more powerful in the sense that it often will reject more false hypotheses. In addition, we advocate the use of studentization when it is feasible. Unlike some stepwise methods, our method implicitly captures the joint dependence structure of the test statistics, which results in increased ability to detect alternative hypotheses. We prove our method asymptotically controls the familywise error rate under minimal assumptions. We present our methodology in the context of comparing several strategies to a common benchmark and deciding which strategies actually beat the benchmark. However, our ideas can easily be extended and/or modified to other contexts, such as making inference for the individual regression coefficients in a multiple regression framework. Some simulation studies show the improvements of our methods over previous proposals. We also provide an application to a set of real data.
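The classical step-down baseline that this kind of procedure improves upon is Holm's method, which also controls the familywise error rate but, unlike the paper's bootstrap-based approach, ignores the joint dependence of the test statistics. A minimal implementation:

```python
def holm_stepdown(pvalues, alpha=0.05):
    """Holm's step-down procedure: sort p-values ascending and reject while
    p_(k) <= alpha / (m - k); stop at the first failure. Controls the
    familywise error rate at alpha under arbitrary dependence."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    rejected = [False] * m
    for k, i in enumerate(order):
        if pvalues[i] <= alpha / (m - k):
            rejected[i] = True
        else:
            break  # step-down: once one test survives, all larger p-values do
    return rejected

# Four strategies tested against a benchmark (p-values are hypothetical).
rej = holm_stepdown([0.001, 0.04, 0.03, 0.005], alpha=0.05)
```

Here the first and fourth hypotheses are rejected; a single-step Bonferroni cut-off of alpha/m would reject the same two in this example, but Holm can never reject fewer, which is the sense in which step-down methods gain power.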
Abstract:
We consider the application of normal theory methods to the estimation and testing of a general type of multivariate regression models with errors-in-variables, in the case where various data sets are merged into a single analysis and the observable variables possibly deviate from normality. The various samples to be merged can differ on the set of observable variables available. We show that there is a convenient way to parameterize the model so that, despite the possible non-normality of the data, normal-theory methods yield correct inferences for the parameters of interest and for the goodness-of-fit test. The theory described encompasses both the functional and structural model cases, and can be implemented using standard software for structural equation models, such as LISREL, EQS, and LISCOMP, among others. An illustration with Monte Carlo data is presented.
Abstract:
Many factors inhibiting and facilitating economic growth have been suggested. Can agnostics rely on international income data to tell them which matter? We find that agnostic priors lead to conclusions that are sensitive to differences across available income estimates. For example, the PWT 6.2 revision of the 1960-96 income estimates in the PWT 6.1 leads to substantial changes regarding the role of government, international trade, demography, and geography. We conclude that margins of error in international income estimates appear too large for agnostic growth empirics.
Abstract:
This paper investigates the link between brand performance and cultural primes in high-risk, innovation-based sectors. In the theory section, we propose that the level of cultural uncertainty avoidance embedded in a firm determines its marketing creativity by increasing the complexity and the broadness of a brand. It also determines the rate of the firm's product innovations. Marketing creativity and product innovation, in turn, influence the firm's marketing performance. Empirically, we study trademarked promotion in the Software Security Industry (SSI). Our sample consists of 87 firms that were active in SSI in 11 countries in the period 1993-2000. We use data from SSI-related trademarks registered by these firms, ending up with 2,911 SSI-related trademarks and a panel of 18,213 observations. We estimate a two-stage model: first, we predict the complexity and the broadness of a trademark, as measures of marketing creativity, and the rate of product innovations. Among several control variables, our variable of theoretical interest is Hofstede's uncertainty avoidance cultural index. Then, we estimate trademark duration with a hazard model, using the predicted complexity and broadness as well as the rate of product innovations, along with the same control variables. Our evidence confirms that cultural uncertainty avoidance affects the duration of trademarks through the firm's marketing creativity and product innovation.
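Duration analysis of trademark lifetimes can be illustrated with the simplest hazard model of all: a constant hazard with right-censoring. This is a far cruder sketch than the paper's covariate-based hazard model, and all the numbers below are hypothetical.

```python
import math

def exponential_hazard_mle(durations, events):
    """MLE of a constant hazard rate under right-censoring:
    lambda_hat = (observed failures) / (total time at risk)."""
    return sum(events) / sum(durations)

# Hypothetical trademark lifetimes in years; event = 1 means the mark lapsed,
# event = 0 means it was still alive at the end of the sample (censored).
durations = [2.0, 5.0, 3.0, 8.0, 1.0, 6.0]
events = [1, 0, 1, 0, 1, 1]
lam = exponential_hazard_mle(durations, events)   # 4 lapses over 25 years at risk
median_life = math.log(2) / lam                   # implied median duration
```

Covariates such as predicted trademark complexity would enter through a proportional-hazards specification, lambda_i = lambda_0 * exp(x_i'beta), which the sketch omits.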
Abstract:
This paper presents and estimates a dynamic choice model in the attribute space considering rational consumers. In light of the evidence of several state-dependence patterns, the standard attribute-based model is extended by considering a general utility function where pure inertia and pure variety-seeking behaviors can be explained in the model as particular linear cases. The dynamics of the model are fully characterized by standard dynamic programming techniques. The model presents a stationary consumption pattern that can be inertial, where the consumer only buys one product, or a variety-seeking one, where the consumer shifts among varied products. We run some simulations to analyze the consumption paths out of the steady state. Under the hybrid utility assumption, the consumer behaves inertially among the unfamiliar brands for several periods, eventually switching to a variety-seeking behavior when the stationary levels are approached. An empirical analysis is run using scanner databases for three different product categories: fabric softener, saltine cracker, and catsup. Non-linear specifications provide the best fit of the data, as hybrid functional forms are found in all the product categories for most attributes and segments. These results reveal the statistical superiority of the non-linear structure and confirm the gradual trend to seek variety as the level of familiarity with the purchased items increases.
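The "standard dynamic programming techniques" the abstract refers to boil down to solving a Bellman equation. Here is a generic value-iteration sketch on a toy two-state, two-action problem loosely evoking repeat-versus-switch brand choice; the states, rewards, and transition matrices are invented for illustration and are not the paper's model.

```python
import numpy as np

def value_iteration(P, r, beta=0.95, tol=1e-10):
    """Solve V(s) = max_a [ r(s,a) + beta * sum_s' P[a][s,s'] V(s') ] by
    successive approximation; the Bellman operator is a contraction for beta < 1."""
    V = np.zeros(r.shape[0])
    while True:
        Q = np.array([r[:, a] + beta * P[a] @ V for a in range(r.shape[1])]).T
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Toy example: 2 familiarity states, 2 actions (0 = repeat brand, 1 = switch).
r = np.array([[1.0, 0.2],   # in state 0, repeating pays more
              [0.3, 1.0]])  # in state 1, switching pays more
P = [np.array([[0.9, 0.1], [0.9, 0.1]]),   # action 0 keeps you near state 0
     np.array([[0.1, 0.9], [0.1, 0.9]])]   # action 1 moves you toward state 1
V, policy = value_iteration(P, r)  # optimal policy: repeat in 0, switch in 1
```

The stationary policy that comes out is the discrete analogue of the paper's inertial versus variety-seeking steady states.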
Abstract:
Correspondence analysis has found extensive use in ecology, archeology, linguistics and the social sciences as a method for visualizing the patterns of association in a table of frequencies or nonnegative ratio-scale data. Inherent to the method is the expression of the data in each row or each column relative to their respective totals, and it is these sets of relative values (called profiles) that are visualized. This relativization of the data makes perfect sense when the margins of the table represent samples from sub-populations of inherently different sizes. But in some ecological applications sampling is performed on equal areas or equal volumes, so that the absolute levels of the observed occurrences may be of relevance, in which case relativization may not be required. In this paper we define the correspondence analysis of the raw unrelativized data and discuss its properties, comparing this new method to regular correspondence analysis and to a related variant of non-symmetric correspondence analysis.
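For reference, regular correspondence analysis (the method the paper modifies) reduces to a singular value decomposition of standardised residuals. A compact numpy sketch on an invented 3x3 frequency table:

```python
import numpy as np

def correspondence_analysis(N):
    """Classical CA of a frequency table N via the SVD of the standardised
    residuals S = D_r^{-1/2} (P - r c^T) D_c^{-1/2}, where P = N / n and
    r, c are the row and column masses."""
    P = N / N.sum()
    r, c = P.sum(axis=1), P.sum(axis=0)
    S = np.diag(r ** -0.5) @ (P - np.outer(r, c)) @ np.diag(c ** -0.5)
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    rows = np.diag(r ** -0.5) @ U * sv   # principal row coordinates
    inertia = sv ** 2                    # principal inertias per axis
    return rows, inertia

# Hypothetical abundance table (rows: sites, columns: species).
N = np.array([[30.0, 10.0, 5.0],
              [10.0, 25.0, 10.0],
              [5.0, 10.0, 20.0]])
rows, inertia = correspondence_analysis(N)
```

A useful sanity check is that the total inertia equals the Pearson chi-square statistic of the table divided by its grand total; the unrelativized variant proposed in the paper would replace the profile-based residuals above.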
Abstract:
We consider two fundamental properties in the analysis of two-way tables of positive data: the principle of distributional equivalence, one of the cornerstones of correspondence analysis of contingency tables, and the principle of subcompositional coherence, which forms the basis of compositional data analysis. For an analysis to be subcompositionally coherent, it suffices to analyse the ratios of the data values. The usual approach to dimension reduction in compositional data analysis is to perform principal component analysis on the logarithms of ratios, but this method does not obey the principle of distributional equivalence. We show that by introducing weights for the rows and columns, the method achieves this desirable property. This weighted log-ratio analysis is theoretically equivalent to "spectral mapping", a multivariate method developed almost 30 years ago for displaying ratio-scale data from biological activity spectra. The close relationship between spectral mapping and correspondence analysis is also explained, as well as their connection with association modelling. The weighted log-ratio methodology is applied here to frequency data in linguistics and to chemical compositional data in archaeology.
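The log-ratio building block behind this methodology is easy to demonstrate. Below is the centred log-ratio (clr) transform on an invented two-row composition; the paper's weighted variant additionally introduces row and column masses, which this unweighted sketch omits.

```python
import numpy as np

def clr(x):
    """Centred log-ratio transform of compositions (rows):
    clr(x)_j = log(x_j) - mean_k log(x_k). Working with log-ratios makes
    the analysis subcompositionally coherent and scale-invariant."""
    lx = np.log(x)
    return lx - lx.mean(axis=-1, keepdims=True)

# Hypothetical chemical compositions (rows sum to 1, but they need not).
comp = np.array([[0.2, 0.3, 0.5],
                 [0.1, 0.6, 0.3]])
Z = clr(comp)  # each clr row sums to zero; PCA on Z gives log-ratio analysis
```

Two defining properties are visible immediately: clr rows sum to zero, and rescaling a composition (e.g. reporting it in different units) leaves the transform unchanged.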
Abstract:
This paper presents a method for the measurement of changes in health inequality and income-related health inequality over time in a population. For pure health inequality (as measured by the Gini coefficient) and income-related health inequality (as measured by the concentration index), we show how measures derived from longitudinal data can be related to cross-section Gini and concentration indices that have typically been reported in the literature to date, along with measures of health mobility inspired by the literature on income mobility. We also show how these measures of mobility can be usefully decomposed into the contributions of different covariates. We apply these methods to investigate the degree of income-related mobility in the GHQ measure of psychological well-being in the first nine waves of the British Household Panel Survey (BHPS). This reveals that dynamics increase the absolute value of the concentration index of GHQ on income by 10%.
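The cross-section concentration index that this framework builds on has a simple covariance formula. A minimal sketch on invented four-person data (the fractional-rank convention used here is one common choice; the paper's longitudinal extensions are not attempted):

```python
import numpy as np

def concentration_index(health, income):
    """Concentration index C = 2 * cov(h, R) / mean(h), where R is the
    fractional income rank (i - 0.5) / n after sorting by income.
    C > 0: health concentrated among the richer; C = 0: no income gradient."""
    order = np.argsort(income)
    h = np.asarray(health, dtype=float)[order]
    n = len(h)
    R = (np.arange(1, n + 1) - 0.5) / n
    return 2.0 * np.mean((h - h.mean()) * (R - R.mean())) / h.mean()

# Hypothetical data: equal health across incomes vs. pro-rich health.
c_equal = concentration_index([5, 5, 5, 5], [10, 20, 30, 40])    # 0.0
c_prorich = concentration_index([2, 4, 6, 8], [10, 20, 30, 40])  # positive
```

Replacing the income rank with the health rank turns the same formula into a Gini coefficient for health, which is why the paper can treat the two indices in a unified way.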
Abstract:
A biplot, which is the multivariate generalization of the two-variable scatterplot, can be used to visualize the results of many multivariate techniques, especially those that are based on the singular value decomposition. We consider data sets consisting of continuous-scale measurements, their fuzzy coding and the biplots that visualize them, using a fuzzy version of multiple correspondence analysis. Of special interest is the way quality of fit of the biplot is measured, since it is well-known that regular (i.e., crisp) multiple correspondence analysis seriously under-estimates this measure. We show how the results of fuzzy multiple correspondence analysis can be defuzzified to obtain estimated values of the original data, and prove that this implies an orthogonal decomposition of variance. This permits a measure of fit to be calculated in the familiar form of a percentage of explained variance, which is directly comparable to the corresponding fit measure used in principal component analysis of the original data. The approach is motivated initially by its application to a simulated data set, showing how the fuzzy approach can lead to diagnosing nonlinear relationships, and finally it is applied to a real set of meteorological data.