44 results for non-stationary panel data


Relevance: 100.00%

Abstract:

This paper proposes an improved covariate unit root test that exploits cross-sectional dependence information when the panel data null hypothesis of a unit root is rejected. More explicitly, to increase the power of the test, we suggest using more than one covariate and offer several ways to select the ‘best’ covariates from the set of potential covariates represented by the individuals in the panel. Employing our methods, we investigate the Prebisch–Singer hypothesis for nine commodity prices. Our results show that this hypothesis holds for all but the price of petroleum.
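As a rough illustration of the covariate idea, the sketch below runs a covariate-augmented Dickey–Fuller-style regression in Python, adding the differenced series of another panel member as a stationary covariate. The function name, the single covariate and the simulated data are illustrative assumptions; the critical values of the resulting t-statistic differ from the standard Dickey–Fuller ones and are not reproduced here.

```python
# Sketch of a covariate-augmented (CADF-style) unit root regression.
# `y` is the series under test; `x` is a hypothetical covariate taken from
# another panel member. Critical values are non-standard and not shown.
import numpy as np
import statsmodels.api as sm

def cadf_stat(y, x, lags=1):
    dy = np.diff(y)          # first differences of the tested series
    ylag = y[:-1]            # lagged level
    dx = np.diff(x)          # stationary covariate (differenced panel member)
    rows, Y = [], dy[lags:]
    for t in range(lags, len(dy)):
        rows.append([1.0, ylag[t], dx[t]] + [dy[t - j] for j in range(1, lags + 1)])
    res = sm.OLS(Y, np.array(rows)).fit()
    return res.tvalues[1]    # t-statistic on the lagged level

# Illustration with simulated data only
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200))   # a random walk
x = np.cumsum(rng.normal(size=200))   # stand-in "covariate" series
print(cadf_stat(y, x))
```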

Relevance: 100.00%

Abstract:

Statistical downscaling (SD) methods have become a popular, low-cost and accessible means of bridging the gap between the coarse spatial resolution at which climate models output climate scenarios and the finer spatial scale at which impact modellers require these scenarios, with various SD techniques used for a wide range of applications across the world. This paper compares the Generator for Point Climate Change (GPCC) model and the Statistical DownScaling Model (SDSM)—two contrasting SD methods—in terms of their ability to generate precipitation series under non-stationary conditions across ten contrasting global climates. The mean, maximum and a selection of distribution statistics as well as the cumulative frequencies of dry and wet spells for four different temporal resolutions were compared between the models and the observed series for a validation period. Results indicate that both methods can generate daily precipitation series that generally closely mirror observed series for a wide range of non-stationary climates. However, GPCC tends to overestimate higher precipitation amounts, whilst SDSM tends to underestimate these. This implies that GPCC is more likely to overestimate the effects of precipitation on a given impact sector, whilst SDSM is likely to underestimate the effects. GPCC performs better than SDSM in reproducing wet and dry day frequency, which is a key advantage for many impact sectors. Overall, the mixed performance of the two methods illustrates the importance of users performing a thorough validation in order to determine the influence of simulated precipitation on their chosen impact sector.
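A minimal validation sketch along these lines, assuming hypothetical observed and downscaled daily series and a 1 mm wet-day threshold, might compare summary statistics and wet/dry spell behaviour as follows.

```python
# Compare simple validation statistics between observed and downscaled series.
# `obs`, `sim` and the 1 mm threshold are illustrative assumptions.
import numpy as np

def spell_lengths(is_wet):
    """Lengths of consecutive runs of True in a boolean daily series."""
    lengths, run = [], 0
    for w in is_wet:
        if w:
            run += 1
        elif run:
            lengths.append(run)
            run = 0
    if run:
        lengths.append(run)
    return np.array(lengths)

def summarise(p, wet_threshold=1.0):
    wet = p >= wet_threshold
    return {
        "mean": p.mean(),
        "max": p.max(),
        "p95": np.percentile(p, 95),
        "wet_days": int(wet.sum()),
        "mean_wet_spell": spell_lengths(wet).mean(),
        "mean_dry_spell": spell_lengths(~wet).mean(),
    }

rng = np.random.default_rng(1)
obs = rng.gamma(shape=0.4, scale=6.0, size=3650)   # stand-in for observed data
sim = rng.gamma(shape=0.4, scale=6.5, size=3650)   # stand-in for downscaled data
print("observed :", summarise(obs))
print("simulated:", summarise(sim))
```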

Relevance: 100.00%

Abstract:

Successful innovation depends on knowledge – technological, strategic and market related. In this paper we explore the role and interaction of firms’ existing knowledge stocks and current knowledge flows in shaping innovation success. The paper contributes to our understanding of the determinants of firms’ innovation outputs and provides new information on the relationship between knowledge stocks, as measured by patents, and innovation output indicators. Our analysis uses innovation panel data relating to plants’ internal knowledge creation, external knowledge search and innovation outputs. Firm-level patent data is matched with this plant-level innovation panel data to provide a measure of firms’ knowledge stock. Two substantive conclusions follow. First, existing knowledge stocks have weak negative rather than positive impacts on firms’ innovation outputs, reflecting potential core-rigidities or negative path dependencies rather than the accumulation of competitive advantages. Second, knowledge flows derived from internal investment and external search dominate the effect of existing knowledge stocks on innovation performance. Both results emphasize the importance of firms’ knowledge search strategies. Our results also re-emphasize the potential issues which arise when using patents as a measure of innovation.
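The kind of relationship examined here could be sketched, with purely illustrative variable names and simulated data, as a plant fixed-effects regression of innovation output on a patent-based knowledge stock and current knowledge flows.

```python
# Hypothetical plant-level panel: innovation output regressed on a patent-based
# knowledge stock and current knowledge flows, with plant fixed effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_plants, n_years = 50, 6
df = pd.DataFrame({
    "plant": np.repeat(np.arange(n_plants), n_years),
    "year": np.tile(np.arange(2005, 2005 + n_years), n_plants),
    "patent_stock": rng.poisson(3, n_plants * n_years),
    "internal_rnd": rng.normal(size=n_plants * n_years),
    "external_search": rng.normal(size=n_plants * n_years),
})
df["innov_output"] = (0.5 * df["internal_rnd"] + 0.4 * df["external_search"]
                      - 0.05 * df["patent_stock"] + rng.normal(size=len(df)))

# Plant fixed effects absorb time-invariant heterogeneity
fit = smf.ols("innov_output ~ patent_stock + internal_rnd + external_search + C(plant)",
              data=df).fit()
print(fit.params[["patent_stock", "internal_rnd", "external_search"]])
```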

Relevance: 100.00%

Abstract:

In this paper, we re-examine two important aspects of the dynamics of relative primary commodity prices, namely the secular trend and the short run volatility. To do so, we employ 25 series, some of them starting as far back as 1650, and powerful panel data stationarity tests that allow for endogenous multiple structural breaks. Results show that all the series are stationary after allowing for endogenous multiple breaks. Test results on the Prebisch–Singer hypothesis, which states that relative commodity prices follow a downward secular trend, are mixed but with a majority of series showing negative trends. We also make a first attempt at identifying the potential drivers of the structural breaks. We end by investigating the dynamics of the volatility of the 25 relative primary commodity prices, again allowing for endogenous multiple breaks. We describe the often time-varying volatility in commodity prices and show that it has increased in recent years.
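As a simplified, single-series illustration (not the paper's multiple-break panel test), one might estimate the secular trend of a log relative price and test stationarity around that trend with a KPSS test; the simulated series below is purely illustrative.

```python
# Fit a log-linear trend and test trend-stationarity for one simulated series.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import kpss

rng = np.random.default_rng(3)
t = np.arange(300)
log_rel_price = -0.002 * t + rng.normal(scale=0.05, size=300)  # downward trend + noise

# Secular trend: OLS of the log relative price on a constant and time
trend_fit = sm.OLS(log_rel_price, sm.add_constant(t)).fit()
print("trend per period:", trend_fit.params[1])

# Stationarity around a deterministic trend (null: trend-stationary)
stat, pvalue, lags, crit = kpss(log_rel_price, regression="ct", nlags="auto")
print("KPSS statistic:", stat, "p-value:", pvalue)
```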

Relevance: 100.00%

Abstract:

It is widely believed that work-related training increases a worker’s probability of moving up the job-quality ladder. This is usually couched in terms of effects on wages, but it has also been argued that training increases the probability of moving from non-permanent forms of employment to more permanent employment. This hypothesis is tested using nationally representative panel data for Australia, a country where the incidence of non-permanent employment, and especially casual employment, is high by international standards. While a positive association between participation in work-related training and the subsequent probability of moving from either casual or fixed-term contract employment to permanent employment is observed among men, this is shown to be driven not by a causal impact of training on transitions but by differences between those who do and do not receive training; i.e., selection bias.
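The selection-bias point can be illustrated with a small simulation: when an unobserved trait drives both training receipt and transitions to permanent work, a naive probit shows a positive training coefficient even though training has no causal effect. The variable names and data below are hypothetical.

```python
# Simulated example of selection bias in the training-transition association.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 2000
ability = rng.normal(size=n)                            # unobserved trait (source of selection)
training = (0.8 * ability + rng.normal(size=n) > 0).astype(int)
to_permanent = (0.6 * ability + rng.normal(size=n) > 0).astype(int)  # no causal training effect
df = pd.DataFrame({"training": training, "to_permanent": to_permanent})

naive = smf.probit("to_permanent ~ training", data=df).fit(disp=False)
print(naive.params)   # positive coefficient despite zero causal effect -> selection bias
```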

Relevance: 100.00%

Abstract:

In this paper, we extend the heterogeneous panel data stationarity test of Hadri [Econometrics Journal, Vol. 3 (2000) pp. 148–161] to cases where breaks are taken into account. Four models with different patterns of breaks under the null hypothesis are specified. Two of the models have already been proposed by Carrion-i-Silvestre et al. [Econometrics Journal, Vol. 8 (2005) pp. 159–175]. The moments of the statistics corresponding to the four models are derived in closed form via characteristic functions. We also provide the exact moments of a modified statistic that do not asymptotically depend on the location of the break point under the null hypothesis. The cases where the break point is unknown are also considered. For the model with breaks in the level and no time trend, and for the model with breaks in the level and in the time trend, Carrion-i-Silvestre et al. [Econometrics Journal, Vol. 8 (2005) pp. 159–175] showed that the number of breaks and their positions may be allowed to differ across individuals for cases with known and unknown breaks. Their results can easily be extended to the proposed modified statistic. The asymptotic distributions of all the proposed statistics are derived under the null hypothesis and are shown to be normal. We show by simulations that our suggested tests generally perform well in finite samples, with the exception of the modified test. In an empirical application to the consumer prices of 22 OECD countries during the period from 1953 to 2003, we find evidence of stationarity once a structural break and cross-sectional dependence are accommodated.
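For orientation, the sketch below computes the original Hadri-type statistic that this extension builds on: individual KPSS-type statistics averaged across the panel and standardized to an asymptotically normal test statistic. It uses a constant only, a simple variance in place of a long-run variance estimator, and ignores breaks and cross-sectional dependence; the constant-only moments 1/6 and 1/45 are the standard values.

```python
# Simplified Hadri-type panel stationarity statistic (constant only, no breaks,
# residual variance used in place of a long-run variance estimator).
import numpy as np

def hadri_z(panel):
    """panel: (N, T) array of observations."""
    N, T = panel.shape
    etas = []
    for y in panel:
        e = y - y.mean()                # residuals from a constant
        S = np.cumsum(e)                # partial sums
        sigma2 = (e ** 2).mean()        # simple variance estimate
        etas.append((S ** 2).sum() / (T ** 2 * sigma2))
    lm = np.mean(etas)
    xi, zeta = 1.0 / 6.0, 1.0 / 45.0    # constant-only moments
    return np.sqrt(N) * (lm - xi) / np.sqrt(zeta)

rng = np.random.default_rng(5)
stationary_panel = rng.normal(size=(22, 51))   # e.g. 22 countries, 51 years
print(hadri_z(stationary_panel))               # approximately N(0,1) under the null
```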

Relevance: 100.00%

Abstract:

Hidden Markov models (HMMs) are widely used models for sequential data. As with other probabilistic graphical models, they require the specification of precise probability values, which can be too restrictive for some domains, especially when data are scarce or costly to acquire. We present a generalized version of HMMs, whose quantification can be done by sets of, instead of single, probability distributions. Our models have the ability to suspend judgment when there is not enough statistical evidence, and can serve as a sensitivity analysis tool for standard non-stationary HMMs. Efficient inference algorithms are developed to address standard HMM usage such as the computation of likelihoods and most probable explanations. Experiments with real data show that the use of imprecise probabilities leads to more reliable inferences without compromising efficiency.
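For reference, the likelihood computation mentioned here reduces, in the precise case, to the standard HMM forward recursion; a minimal sketch with illustrative parameter values is shown below. The imprecise variant propagates sets of distributions rather than the single vectors used here.

```python
# Standard HMM forward recursion for the sequence likelihood.
import numpy as np

def hmm_likelihood(pi, A, B, obs):
    """pi: (K,) initial probs, A: (K,K) transitions, B: (K,M) emissions,
    obs: sequence of observed symbol indices."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

# Illustrative two-state, two-symbol model
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(hmm_likelihood(pi, A, B, obs=[0, 1, 1, 0]))
```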

Relevance: 100.00%

Abstract:

Introduction: HIV testing is a cornerstone of efforts to combat the HIV epidemic, and testing conducted as part of surveillance provides invaluable data on the spread of infection and the effectiveness of campaigns to reduce the transmission of HIV. However, participation in HIV testing can be low, and if respondents systematically select not to be tested because they know or suspect they are HIV positive (and fear disclosure), standard approaches to deal with missing data will fail to remove selection bias. We implemented Heckman-type selection models, which can be used to adjust for missing data that are not missing at random, and established the extent of selection bias in a population-based HIV survey in an HIV hyperendemic community in rural South Africa.

Methods: We used data from a population-based HIV survey carried out in 2009 in rural KwaZulu-Natal, South Africa. In this survey, 5565 women (35%) and 2567 men (27%) provided blood for an HIV test. We accounted for missing data using interviewer identity as a selection variable which predicted consent to HIV testing but was unlikely to be independently associated with HIV status. Our approach involved using this selection variable to examine the HIV status of residents who would ordinarily refuse to test, except that they were allocated a persuasive interviewer. Our copula model allows for flexibility when modelling the dependence structure between HIV survey participation and HIV status.

Results: For women, our selection model generated an HIV prevalence estimate of 33% (95% CI 27–40) for all people eligible to consent to HIV testing in the survey. This estimate is higher than the estimate of 24% generated when only information from respondents who participated in testing is used in the analysis, and the estimate of 27% when imputation analysis is used to predict missing data on HIV status. For men, we found an HIV prevalence of 25% (95% CI 15–35) using the selection model, compared to 16% among those who participated in testing, and 18% estimated with imputation. We provide new confidence intervals that correct for the fact that the relationship between testing and HIV status is unknown and requires estimation.

Conclusions: We confirm the feasibility and value of adopting selection models to account for missing data in population-based HIV surveys and surveillance systems. Elements of survey design, such as interviewer identity, present the opportunity to adopt this approach in routine applications. Where non-participation is high, true confidence intervals are much wider than those suggested by standard approaches to dealing with missing data.
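The selection problem and the role of the interviewer-identity exclusion restriction can be illustrated with a small simulation (all numbers hypothetical): when HIV status lowers consent, prevalence among testers understates true prevalence, and interviewer identity predicts consent in a first-stage probit. The full copula/Heckman-type correction used in the study is not reproduced here.

```python
# Simulated illustration of non-random missingness in HIV testing.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 5000
interviewer = rng.integers(0, 20, size=n)                # 20 interviewers
persuasiveness = rng.normal(size=20)[interviewer]        # affects consent only
hiv = (rng.uniform(size=n) < 0.30).astype(int)           # true prevalence 30%
consent = (0.8 * persuasiveness - 0.7 * hiv + rng.normal(size=n) > 0).astype(int)

df = pd.DataFrame({"consent": consent, "hiv": hiv, "interviewer": interviewer})
print("naive prevalence among testers:", df.loc[df.consent == 1, "hiv"].mean())
print("true prevalence:               ", df["hiv"].mean())

# First stage: interviewer identity predicts consent but not HIV status directly
first_stage = smf.probit("consent ~ C(interviewer)", data=df).fit(disp=False)
print(first_stage.prsquared)
```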

Relevance: 100.00%

Abstract:

Hidden Markov models (HMMs) are widely used probabilistic models of sequential data. As with other probabilistic models, they require the specification of local conditional probability distributions, whose assessment can be too difficult and error-prone, especially when data are scarce or costly to acquire. The imprecise HMM (iHMM) generalizes HMMs by allowing the quantification to be done by sets of, instead of single, probability distributions. iHMMs have the ability to suspend judgment when there is not enough statistical evidence, and can serve as a sensitivity analysis tool for standard non-stationary HMMs. In this paper, we consider iHMMs under the strong independence interpretation, for which we develop efficient inference algorithms to address standard HMM usage such as the computation of likelihoods and most probable explanations, as well as performing filtering and predictive inference. Experiments with real data show that iHMMs produce more reliable inferences without compromising the computational efficiency.
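For reference, the "most probable explanation" query reduces, in the precise case, to the Viterbi recursion sketched below with illustrative parameters; the iHMM algorithms in the paper generalize this to sets of probability distributions.

```python
# Standard Viterbi recursion for the most probable hidden state sequence.
import numpy as np

def viterbi(pi, A, B, obs):
    K, T = len(pi), len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)        # (from state, to state)
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(viterbi(pi, A, B, obs=[0, 0, 1, 1, 0]))
```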

Relevance: 100.00%

Abstract:

Energy efficiency improvement has been a key objective of China’s long-term energy policy. In this paper, we derive single-factor technical energy efficiency (abbreviated as energy efficiency) in China from multi-factor efficiency estimated by means of a translog production function and a stochastic frontier model on the basis of panel data on 29 Chinese provinces over the period 2003–2011. We find that average energy efficiency has been increasing over the research period and that the provinces with the highest energy efficiency lie on the east coast and those with the lowest in the west, with an intermediate corridor in between. In the analysis of the determinants of energy efficiency by means of a spatial Durbin error model, factors in both the own province and first-order neighboring provinces are considered. Per capita income in the own province has a positive effect. Furthermore, foreign direct investment and population density in the own province and in neighboring provinces have positive effects, whereas the share of state-owned enterprises in Gross Provincial Product in the own province and in neighboring provinces has negative effects. From the analysis it follows that inflow of foreign direct investment and reform of state-owned enterprises are important policy levers.
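A rough sketch of the production-function side, using an OLS translog fit on simulated data as a simplified stand-in for the stochastic frontier model (a true frontier adds a one-sided inefficiency term, from which the efficiency scores are derived), might look as follows; all variable names and data are hypothetical.

```python
# OLS translog production function on simulated province-year data
# (stand-in for the stochastic frontier estimation described in the abstract).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 29 * 9                                              # 29 provinces x 9 years
k, l, e = (rng.lognormal(size=n) for _ in range(3))     # capital, labour, energy
y = (k ** 0.4) * (l ** 0.4) * (e ** 0.2) * np.exp(rng.normal(scale=0.1, size=n))
df = pd.DataFrame({"lny": np.log(y), "lnk": np.log(k), "lnl": np.log(l), "lne": np.log(e)})

translog = smf.ols(
    "lny ~ lnk + lnl + lne + I(0.5*lnk**2) + I(0.5*lnl**2) + I(0.5*lne**2)"
    " + lnk:lnl + lnk:lne + lnl:lne", data=df).fit()
print(translog.params)
```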

Relevance: 100.00%

Abstract:

This paper reports the findings from a discrete-choice experiment designed to estimate the economic benefits associated with rural landscape improvements in Ireland. Using a mixed logit model, the panel nature of the dataset is exploited to retrieve willingness-to-pay values for every individual in the sample. This departs from customary approaches in which the willingness-to-pay estimates are normally expressed as measures of central tendency of an a priori distribution. Random-effects models for panel data are subsequently used to identify the determinants of the individual-specific willingness-to-pay estimates. In comparison with the standard methods used to incorporate individual-specific variables into the analysis of discrete-choice experiments, the analytical approach outlined in this paper is shown to add considerable explanatory power to the welfare estimates.
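A minimal sketch of the two-stage idea, assuming hypothetical individual attribute coefficients already recovered from a mixed logit (the logit estimation itself is not shown): willingness to pay is the ratio of the attribute coefficient to the negative of the cost coefficient, and a second-stage regression relates it to a respondent characteristic. The paper's second stage uses random-effects panel models rather than the simple OLS used here.

```python
# WTP from individual-specific coefficients, then a second-stage regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 500
income = rng.normal(50, 10, size=n)                               # hypothetical covariate
beta_attr = 1.0 + 0.02 * income + rng.normal(scale=0.3, size=n)   # individual coefficients
beta_cost = -0.1                                                  # common cost coefficient
wtp = -beta_attr / beta_cost                                      # individual willingness to pay

second_stage = smf.ols("wtp ~ income",
                       data=pd.DataFrame({"wtp": wtp, "income": income})).fit()
print(second_stage.params)
```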

Relevance: 100.00%

Abstract:

Evidence on the persistence of innovation sheds light on the nature of the innovation process and can guide appropriate policy development. This paper examines innovation persistence in Ireland and Northern Ireland using complementary quantitative and case-study approaches. Panel data derived from innovation surveys are used, and suggest very different results from previous analyses of innovation persistence based primarily on patents data. Product and process innovation are found to exhibit strong general persistence but we find no evidence that persistence is stronger among highly active innovators. Our quantitative evidence is most strongly consistent with a process of cumulative accumulation at plant level. Our case-studies highlight a number of factors which can either interrupt or stimulate this process, including market volatility, plants’ organisational context and regulatory changes. Notably, however, the balance of influences on product and process innovation persistence differs, with product innovation persistence linked more strongly to strategic factors and process changes more often driven by market pressures.
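A persistence regression in this spirit might be sketched as a pooled probit of current innovation on lagged innovation across survey waves; the data and variable names below are hypothetical, and the simple specification ignores the initial-conditions and unobserved-heterogeneity issues a full analysis would address.

```python
# Pooled probit of current innovation on lagged innovation (persistence sketch).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n_plants, n_waves = 400, 4
plant_effect = np.repeat(rng.normal(size=n_plants), n_waves)
innov = (plant_effect + rng.normal(size=n_plants * n_waves) > 0).astype(int)
df = pd.DataFrame({
    "plant": np.repeat(np.arange(n_plants), n_waves),
    "wave": np.tile(np.arange(n_waves), n_plants),
    "innov": innov,
})
df["innov_lag"] = df.groupby("plant")["innov"].shift(1)
df = df.dropna()

persistence = smf.probit("innov ~ innov_lag", data=df).fit(disp=False)
print(persistence.params)   # positive lag coefficient indicates persistence
```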

Relevance: 100.00%

Abstract:

This paper proposes a novel image denoising technique based on the normal inverse Gaussian (NIG) density model using an extended non-negative sparse coding (NNSC) algorithm that we have proposed. This algorithm converges to feature basis vectors that exhibit locality and orientation in the spatial and frequency domains. Here, we demonstrate that the NIG density provides a very good fit to non-negative sparse data. In the denoising process, by exploiting an NIG-based maximum a posteriori (MAP) estimator of an image corrupted by additive Gaussian noise, the noise can be reduced successfully. This shrinkage technique, also referred to as the NNSC shrinkage technique, is self-adaptive to the statistical properties of the image data. The denoising method is evaluated using the normalized signal-to-noise ratio (SNR). Experimental results show that the NNSC shrinkage approach is indeed efficient and effective in denoising. In addition, we compare the effectiveness of the NNSC shrinkage method with standard sparse coding shrinkage, wavelet-based shrinkage and the Wiener filter. The simulation results show that our method outperforms the three denoising approaches mentioned above.
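As a loose illustration of the shrinkage idea, the sketch below applies a generic soft-threshold rule in place of the NIG-based MAP shrinkage function derived in the paper, and evaluates the result with a signal-to-noise ratio; the data are simulated.

```python
# Generic shrinkage denoising of sparse coefficients with an SNR evaluation
# (soft threshold used as a stand-in for the NIG MAP shrinkage rule).
import numpy as np

def soft_shrink(coeffs, threshold):
    """Shrink coefficients toward zero by a fixed threshold."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)

def snr_db(clean, estimate):
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum((clean - estimate) ** 2))

rng = np.random.default_rng(10)
clean = rng.laplace(scale=1.0, size=10_000)            # sparse "coefficient" data
noisy = clean + rng.normal(scale=0.5, size=clean.shape)
denoised = soft_shrink(noisy, threshold=0.5)
print("noisy SNR   :", snr_db(clean, noisy))
print("denoised SNR:", snr_db(clean, denoised))
```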